Dec 12 18:45:14.887303 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 12 18:45:14.887340 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:45:14.887359 kernel: BIOS-provided physical RAM map: Dec 12 18:45:14.887369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 12 18:45:14.887380 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Dec 12 18:45:14.887391 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Dec 12 18:45:14.887404 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Dec 12 18:45:14.887416 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Dec 12 18:45:14.887427 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Dec 12 18:45:14.887438 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Dec 12 18:45:14.887449 kernel: NX (Execute Disable) protection: active Dec 12 18:45:14.887465 kernel: APIC: Static calls initialized Dec 12 18:45:14.887486 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Dec 12 18:45:14.887498 kernel: extended physical RAM map: Dec 12 18:45:14.887531 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 12 18:45:14.887544 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Dec 12 18:45:14.887559 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Dec 12 18:45:14.887571 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Dec 12 18:45:14.887584 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Dec 12 18:45:14.887597 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Dec 12 18:45:14.887610 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Dec 12 18:45:14.887623 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Dec 12 18:45:14.887636 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Dec 12 18:45:14.887648 kernel: efi: EFI v2.7 by EDK II Dec 12 18:45:14.887661 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Dec 12 18:45:14.887673 kernel: secureboot: Secure boot disabled Dec 12 18:45:14.887685 kernel: SMBIOS 2.7 present. Dec 12 18:45:14.887700 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 12 18:45:14.887713 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:45:14.887725 kernel: Hypervisor detected: KVM Dec 12 18:45:14.887737 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Dec 12 18:45:14.887749 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:45:14.887762 kernel: kvm-clock: using sched offset of 5226599540 cycles Dec 12 18:45:14.887775 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:45:14.887788 kernel: tsc: Detected 2499.998 MHz processor Dec 12 18:45:14.887801 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:45:14.887813 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:45:14.887828 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Dec 12 18:45:14.887842 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 12 18:45:14.887854 kernel: x86/PAT: Configuration 
[0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:45:14.887873 kernel: Using GB pages for direct mapping Dec 12 18:45:14.887886 kernel: ACPI: Early table checksum verification disabled Dec 12 18:45:14.887899 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Dec 12 18:45:14.887913 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Dec 12 18:45:14.887930 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 12 18:45:14.887944 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 12 18:45:14.887957 kernel: ACPI: FACS 0x00000000789D0000 000040 Dec 12 18:45:14.887970 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 12 18:45:14.887984 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 12 18:45:14.887997 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 12 18:45:14.888010 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 12 18:45:14.888024 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 12 18:45:14.888040 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 12 18:45:14.888053 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 12 18:45:14.888067 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Dec 12 18:45:14.888080 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Dec 12 18:45:14.888094 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Dec 12 18:45:14.888107 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Dec 12 18:45:14.888120 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Dec 12 18:45:14.888133 kernel: ACPI: Reserving SLIT table 
memory at [mem 0x7895a000-0x7895a06b] Dec 12 18:45:14.888149 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Dec 12 18:45:14.888162 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Dec 12 18:45:14.888176 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Dec 12 18:45:14.888189 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Dec 12 18:45:14.888203 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Dec 12 18:45:14.888216 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Dec 12 18:45:14.888229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 12 18:45:14.888243 kernel: NUMA: Initialized distance table, cnt=1 Dec 12 18:45:14.888256 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Dec 12 18:45:14.888270 kernel: Zone ranges: Dec 12 18:45:14.888286 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:45:14.888300 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Dec 12 18:45:14.888313 kernel: Normal empty Dec 12 18:45:14.888327 kernel: Device empty Dec 12 18:45:14.888340 kernel: Movable zone start for each node Dec 12 18:45:14.888354 kernel: Early memory node ranges Dec 12 18:45:14.888367 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 12 18:45:14.888380 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Dec 12 18:45:14.888393 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Dec 12 18:45:14.888409 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Dec 12 18:45:14.888423 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:45:14.888437 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 12 18:45:14.888450 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 12 18:45:14.888464 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Dec 12 18:45:14.888477 kernel: ACPI: PM-Timer IO 
Port: 0xb008 Dec 12 18:45:14.888490 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:45:14.888504 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 12 18:45:14.888595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:45:14.888612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:45:14.888626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:45:14.888639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:45:14.888653 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:45:14.888666 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 12 18:45:14.888680 kernel: TSC deadline timer available Dec 12 18:45:14.888693 kernel: CPU topo: Max. logical packages: 1 Dec 12 18:45:14.888707 kernel: CPU topo: Max. logical dies: 1 Dec 12 18:45:14.888720 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:45:14.888733 kernel: CPU topo: Max. threads per core: 2 Dec 12 18:45:14.888749 kernel: CPU topo: Num. cores per package: 1 Dec 12 18:45:14.888763 kernel: CPU topo: Num. 
threads per package: 2 Dec 12 18:45:14.888776 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 12 18:45:14.888790 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:45:14.888803 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Dec 12 18:45:14.888817 kernel: Booting paravirtualized kernel on KVM Dec 12 18:45:14.888830 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:45:14.888844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 12 18:45:14.888857 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 12 18:45:14.888873 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 12 18:45:14.888886 kernel: pcpu-alloc: [0] 0 1 Dec 12 18:45:14.888899 kernel: kvm-guest: PV spinlocks enabled Dec 12 18:45:14.888914 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 12 18:45:14.888929 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:45:14.888943 kernel: random: crng init done Dec 12 18:45:14.888956 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:45:14.888970 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:45:14.888986 kernel: Fallback order for Node 0: 0 Dec 12 18:45:14.889000 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Dec 12 18:45:14.889014 kernel: Policy zone: DMA32 Dec 12 18:45:14.890560 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:45:14.890591 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 18:45:14.890608 kernel: Kernel/User page tables isolation: enabled Dec 12 18:45:14.890625 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:45:14.890642 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:45:14.890658 kernel: Dynamic Preempt: voluntary Dec 12 18:45:14.890675 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:45:14.890693 kernel: rcu: RCU event tracing is enabled. Dec 12 18:45:14.890710 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 18:45:14.890730 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:45:14.890746 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:45:14.890763 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:45:14.890779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:45:14.890796 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 18:45:14.890815 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:45:14.890832 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:45:14.890848 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:45:14.890865 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 12 18:45:14.890881 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 12 18:45:14.890898 kernel: Console: colour dummy device 80x25 Dec 12 18:45:14.890915 kernel: printk: legacy console [tty0] enabled Dec 12 18:45:14.890931 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:45:14.890950 kernel: ACPI: Core revision 20240827 Dec 12 18:45:14.890967 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 12 18:45:14.890984 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:45:14.891000 kernel: x2apic enabled Dec 12 18:45:14.891016 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:45:14.891032 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 12 18:45:14.891049 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 12 18:45:14.891065 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 12 18:45:14.891082 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Dec 12 18:45:14.891098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:45:14.891117 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:45:14.891133 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:45:14.891150 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 12 18:45:14.891166 kernel: RETBleed: Vulnerable Dec 12 18:45:14.891182 kernel: Speculative Store Bypass: Vulnerable Dec 12 18:45:14.891198 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 18:45:14.891214 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 18:45:14.891230 kernel: GDS: Unknown: Dependent on hypervisor status Dec 12 18:45:14.891246 kernel: active return thunk: its_return_thunk Dec 12 18:45:14.891262 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:45:14.891278 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:45:14.891298 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:45:14.891314 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:45:14.891331 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 12 18:45:14.891347 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 12 18:45:14.891363 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 12 18:45:14.891379 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 12 18:45:14.891395 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 12 18:45:14.891412 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 12 18:45:14.891428 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:45:14.891444 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 12 18:45:14.891463 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 12 18:45:14.891489 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 12 18:45:14.891505 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 12 18:45:14.892570 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 12 18:45:14.892589 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 12 18:45:14.892606 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Dec 12 18:45:14.892622 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:45:14.892639 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:45:14.892655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:45:14.892671 kernel: landlock: Up and running. Dec 12 18:45:14.892687 kernel: SELinux: Initializing. Dec 12 18:45:14.892705 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:45:14.892726 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:45:14.892743 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 12 18:45:14.892759 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 12 18:45:14.892776 kernel: signal: max sigframe size: 3632 Dec 12 18:45:14.892792 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:45:14.892810 kernel: rcu: Max phase no-delay instances is 400. Dec 12 18:45:14.892826 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:45:14.892843 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 18:45:14.892860 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:45:14.892876 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:45:14.892895 kernel: .... node #0, CPUs: #1 Dec 12 18:45:14.892913 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 12 18:45:14.892930 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 12 18:45:14.892947 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:45:14.892963 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 12 18:45:14.892980 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved) Dec 12 18:45:14.892997 kernel: devtmpfs: initialized Dec 12 18:45:14.893013 kernel: x86/mm: Memory block size: 128MB Dec 12 18:45:14.893032 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Dec 12 18:45:14.893049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:45:14.893065 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 18:45:14.893081 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:45:14.893097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:45:14.893114 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:45:14.893130 kernel: audit: type=2000 audit(1765565113.389:1): state=initialized audit_enabled=0 res=1 Dec 12 18:45:14.893146 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:45:14.893162 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:45:14.893181 kernel: cpuidle: using governor menu Dec 12 18:45:14.893198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:45:14.893214 kernel: dca service started, version 1.12.1 Dec 12 18:45:14.893230 kernel: PCI: Using configuration type 1 for base access Dec 12 18:45:14.893247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 12 18:45:14.893263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 18:45:14.893279 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 18:45:14.893296 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:45:14.893312 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:45:14.893331 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:45:14.893347 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:45:14.893364 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:45:14.893380 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 12 18:45:14.893396 kernel: ACPI: Interpreter enabled Dec 12 18:45:14.893412 kernel: ACPI: PM: (supports S0 S5) Dec 12 18:45:14.893428 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:45:14.893445 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:45:14.893461 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:45:14.893481 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 12 18:45:14.893498 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:45:14.894788 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:45:14.894943 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 12 18:45:14.896568 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 12 18:45:14.896597 kernel: acpiphp: Slot [3] registered Dec 12 18:45:14.896614 kernel: acpiphp: Slot [4] registered Dec 12 18:45:14.896634 kernel: acpiphp: Slot [5] registered Dec 12 18:45:14.896649 kernel: acpiphp: Slot [6] registered Dec 12 18:45:14.896664 kernel: acpiphp: Slot [7] registered Dec 12 18:45:14.896678 kernel: acpiphp: Slot [8] registered Dec 12 18:45:14.896693 kernel: acpiphp: Slot [9] registered 
Dec 12 18:45:14.896707 kernel: acpiphp: Slot [10] registered Dec 12 18:45:14.896722 kernel: acpiphp: Slot [11] registered Dec 12 18:45:14.896736 kernel: acpiphp: Slot [12] registered Dec 12 18:45:14.896750 kernel: acpiphp: Slot [13] registered Dec 12 18:45:14.896768 kernel: acpiphp: Slot [14] registered Dec 12 18:45:14.896782 kernel: acpiphp: Slot [15] registered Dec 12 18:45:14.896796 kernel: acpiphp: Slot [16] registered Dec 12 18:45:14.896810 kernel: acpiphp: Slot [17] registered Dec 12 18:45:14.896825 kernel: acpiphp: Slot [18] registered Dec 12 18:45:14.896857 kernel: acpiphp: Slot [19] registered Dec 12 18:45:14.896871 kernel: acpiphp: Slot [20] registered Dec 12 18:45:14.896884 kernel: acpiphp: Slot [21] registered Dec 12 18:45:14.896898 kernel: acpiphp: Slot [22] registered Dec 12 18:45:14.896914 kernel: acpiphp: Slot [23] registered Dec 12 18:45:14.896933 kernel: acpiphp: Slot [24] registered Dec 12 18:45:14.896948 kernel: acpiphp: Slot [25] registered Dec 12 18:45:14.896964 kernel: acpiphp: Slot [26] registered Dec 12 18:45:14.896979 kernel: acpiphp: Slot [27] registered Dec 12 18:45:14.896995 kernel: acpiphp: Slot [28] registered Dec 12 18:45:14.897010 kernel: acpiphp: Slot [29] registered Dec 12 18:45:14.897025 kernel: acpiphp: Slot [30] registered Dec 12 18:45:14.897040 kernel: acpiphp: Slot [31] registered Dec 12 18:45:14.897056 kernel: PCI host bridge to bus 0000:00 Dec 12 18:45:14.897214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:45:14.897334 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:45:14.897450 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:45:14.897581 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 12 18:45:14.897696 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Dec 12 18:45:14.897825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:45:14.897982 
kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:45:14.898126 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:45:14.898266 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Dec 12 18:45:14.898394 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 12 18:45:14.900595 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 12 18:45:14.900781 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 12 18:45:14.900921 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 12 18:45:14.901062 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 12 18:45:14.901193 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 12 18:45:14.901322 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 12 18:45:14.901472 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:45:14.901909 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Dec 12 18:45:14.902077 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 12 18:45:14.902218 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:45:14.902371 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Dec 12 18:45:14.903993 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Dec 12 18:45:14.904191 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Dec 12 18:45:14.904328 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Dec 12 18:45:14.904347 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:45:14.904363 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:45:14.904378 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:45:14.904397 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 
11 Dec 12 18:45:14.904411 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 12 18:45:14.904426 kernel: iommu: Default domain type: Translated Dec 12 18:45:14.904441 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:45:14.904456 kernel: efivars: Registered efivars operations Dec 12 18:45:14.904470 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:45:14.904485 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:45:14.904499 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Dec 12 18:45:14.904528 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Dec 12 18:45:14.904546 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Dec 12 18:45:14.904678 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 12 18:45:14.904807 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 12 18:45:14.904937 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:45:14.904955 kernel: vgaarb: loaded Dec 12 18:45:14.904970 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 12 18:45:14.904985 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 12 18:45:14.904999 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:45:14.905017 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:45:14.905032 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:45:14.905046 kernel: pnp: PnP ACPI init Dec 12 18:45:14.905061 kernel: pnp: PnP ACPI: found 5 devices Dec 12 18:45:14.905075 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:45:14.905090 kernel: NET: Registered PF_INET protocol family Dec 12 18:45:14.905105 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:45:14.905119 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 12 18:45:14.905134 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Dec 12 18:45:14.905152 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:45:14.905166 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 12 18:45:14.905180 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 12 18:45:14.905195 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:45:14.905209 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:45:14.905223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:45:14.905238 kernel: NET: Registered PF_XDP protocol family Dec 12 18:45:14.905358 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:45:14.905489 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:45:14.905619 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:45:14.905737 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 12 18:45:14.905854 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Dec 12 18:45:14.905988 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 12 18:45:14.906007 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:45:14.906022 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 18:45:14.906037 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 12 18:45:14.906052 kernel: clocksource: Switched to clocksource tsc Dec 12 18:45:14.906070 kernel: Initialise system trusted keyrings Dec 12 18:45:14.906084 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 12 18:45:14.906099 kernel: Key type asymmetric registered Dec 12 18:45:14.906113 kernel: Asymmetric key parser 'x509' registered Dec 12 18:45:14.906127 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:45:14.906142 
kernel: io scheduler mq-deadline registered Dec 12 18:45:14.906157 kernel: io scheduler kyber registered Dec 12 18:45:14.906171 kernel: io scheduler bfq registered Dec 12 18:45:14.906185 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:45:14.906202 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:45:14.906217 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:45:14.906232 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 18:45:14.906247 kernel: i8042: Warning: Keylock active Dec 12 18:45:14.906261 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:45:14.906275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:45:14.906718 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 12 18:45:14.906858 kernel: rtc_cmos 00:00: registered as rtc0 Dec 12 18:45:14.906992 kernel: rtc_cmos 00:00: setting system clock to 2025-12-12T18:45:14 UTC (1765565114) Dec 12 18:45:14.907119 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 12 18:45:14.907162 kernel: intel_pstate: CPU model not supported Dec 12 18:45:14.907183 kernel: efifb: probing for efifb Dec 12 18:45:14.907201 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Dec 12 18:45:14.907220 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Dec 12 18:45:14.907238 kernel: efifb: scrolling: redraw Dec 12 18:45:14.907256 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 12 18:45:14.907274 kernel: Console: switching to colour frame buffer device 100x37 Dec 12 18:45:14.907294 kernel: fb0: EFI VGA frame buffer device Dec 12 18:45:14.907313 kernel: pstore: Using crash dump compression: deflate Dec 12 18:45:14.907330 kernel: pstore: Registered efi_pstore as persistent store backend Dec 12 18:45:14.907348 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:45:14.907366 kernel: Segment Routing with IPv6 Dec 12 18:45:14.907384 kernel: In-situ OAM 
(IOAM) with IPv6 Dec 12 18:45:14.907402 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:45:14.907420 kernel: Key type dns_resolver registered Dec 12 18:45:14.907437 kernel: IPI shorthand broadcast: enabled Dec 12 18:45:14.907458 kernel: sched_clock: Marking stable (2595001783, 149110052)->(2826856565, -82744730) Dec 12 18:45:14.907487 kernel: registered taskstats version 1 Dec 12 18:45:14.907505 kernel: Loading compiled-in X.509 certificates Dec 12 18:45:14.908571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 12 18:45:14.908587 kernel: Demotion targets for Node 0: null Dec 12 18:45:14.908604 kernel: Key type .fscrypt registered Dec 12 18:45:14.908621 kernel: Key type fscrypt-provisioning registered Dec 12 18:45:14.908644 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 18:45:14.908661 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:45:14.908681 kernel: ima: No architecture policies found Dec 12 18:45:14.908699 kernel: clk: Disabling unused clocks Dec 12 18:45:14.908716 kernel: Warning: unable to open an initial console. Dec 12 18:45:14.908733 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 12 18:45:14.908751 kernel: Write protecting the kernel read-only data: 40960k Dec 12 18:45:14.908771 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 12 18:45:14.908789 kernel: Run /init as init process Dec 12 18:45:14.908804 kernel: with arguments: Dec 12 18:45:14.908820 kernel: /init Dec 12 18:45:14.908837 kernel: with environment: Dec 12 18:45:14.908853 kernel: HOME=/ Dec 12 18:45:14.908869 kernel: TERM=linux Dec 12 18:45:14.908887 systemd[1]: Successfully made /usr/ read-only. 
Dec 12 18:45:14.908909 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:45:14.908929 systemd[1]: Detected virtualization amazon.
Dec 12 18:45:14.908946 systemd[1]: Detected architecture x86-64.
Dec 12 18:45:14.908962 systemd[1]: Running in initrd.
Dec 12 18:45:14.908979 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:45:14.908997 systemd[1]: Hostname set to .
Dec 12 18:45:14.909014 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:45:14.909031 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:45:14.909051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:45:14.909068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:45:14.909087 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:45:14.909105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:45:14.909122 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:45:14.909140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:45:14.909159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:45:14.909180 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:45:14.909198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:45:14.909215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:45:14.909233 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:45:14.909250 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:45:14.909267 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:45:14.909285 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:45:14.909302 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:45:14.909319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:45:14.909339 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:45:14.909356 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:45:14.909374 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:45:14.909391 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:45:14.909408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:45:14.909425 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:45:14.909443 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:45:14.909460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:45:14.909479 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:45:14.909497 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:45:14.909529 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:45:14.909546 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:45:14.909562 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:45:14.909579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:45:14.909594 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:45:14.909650 systemd-journald[187]: Collecting audit messages is disabled.
Dec 12 18:45:14.909702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:45:14.909718 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:45:14.909741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:45:14.909761 systemd-journald[187]: Journal started
Dec 12 18:45:14.909796 systemd-journald[187]: Runtime Journal (/run/log/journal/ec2593bfef447864f111005356486736) is 4.7M, max 38.1M, 33.3M free.
Dec 12 18:45:14.907943 systemd-modules-load[189]: Inserted module 'overlay'
Dec 12 18:45:14.918537 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:45:14.929206 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:14.937130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:45:14.942137 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:45:14.946671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:45:14.957691 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:45:14.961234 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:45:14.964768 kernel: Bridge firewalling registered
Dec 12 18:45:14.966551 systemd-modules-load[189]: Inserted module 'br_netfilter'
Dec 12 18:45:14.967934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:45:14.971270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:45:14.978259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:45:14.984316 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:45:14.991179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:45:14.995745 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:45:15.000955 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:45:15.004380 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:45:15.013933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:45:15.017799 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:45:15.026440 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:45:15.078036 systemd-resolved[234]: Positive Trust Anchors:
Dec 12 18:45:15.078640 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:45:15.078711 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:45:15.087775 systemd-resolved[234]: Defaulting to hostname 'linux'.
Dec 12 18:45:15.089210 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:45:15.090548 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:45:15.130557 kernel: SCSI subsystem initialized
Dec 12 18:45:15.140543 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:45:15.153549 kernel: iscsi: registered transport (tcp)
Dec 12 18:45:15.175883 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:45:15.175964 kernel: QLogic iSCSI HBA Driver
Dec 12 18:45:15.195620 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:45:15.216084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:45:15.217182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:45:15.265364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:45:15.267430 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:45:15.332544 kernel: raid6: avx512x4 gen() 17868 MB/s
Dec 12 18:45:15.350537 kernel: raid6: avx512x2 gen() 18068 MB/s
Dec 12 18:45:15.368540 kernel: raid6: avx512x1 gen() 17815 MB/s
Dec 12 18:45:15.386537 kernel: raid6: avx2x4 gen() 17879 MB/s
Dec 12 18:45:15.404539 kernel: raid6: avx2x2 gen() 17948 MB/s
Dec 12 18:45:15.422820 kernel: raid6: avx2x1 gen() 13652 MB/s
Dec 12 18:45:15.422890 kernel: raid6: using algorithm avx512x2 gen() 18068 MB/s
Dec 12 18:45:15.441807 kernel: raid6: .... xor() 24123 MB/s, rmw enabled
Dec 12 18:45:15.441879 kernel: raid6: using avx512x2 recovery algorithm
Dec 12 18:45:15.463609 kernel: xor: automatically using best checksumming function avx
Dec 12 18:45:15.635578 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:45:15.642735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:45:15.645152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:45:15.676148 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Dec 12 18:45:15.682780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:45:15.685734 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:45:15.708672 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Dec 12 18:45:15.735686 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:45:15.737720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:45:15.801882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:45:15.806801 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:45:15.869181 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:45:15.869246 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 12 18:45:15.869463 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 12 18:45:15.876550 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 12 18:45:15.882532 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:45:15.892609 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:18:2d:7a:09:8f
Dec 12 18:45:15.896540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:45:15.896658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:15.897973 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:45:15.900600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:45:15.901877 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:45:15.904593 (udev-worker)[498]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:45:15.932625 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:45:15.937982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:45:15.939290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:15.946613 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 12 18:45:15.946863 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 12 18:45:15.947827 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:45:15.955661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:45:15.957871 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 12 18:45:15.963652 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:45:15.963722 kernel: GPT:9289727 != 33554431
Dec 12 18:45:15.963741 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:45:15.963758 kernel: GPT:9289727 != 33554431
Dec 12 18:45:15.963774 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:45:15.963792 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:45:15.988281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:16.007702 kernel: nvme nvme0: using unchecked data buffer
Dec 12 18:45:16.079309 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 12 18:45:16.125659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 12 18:45:16.126404 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:45:16.137790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 12 18:45:16.147760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 12 18:45:16.148356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 12 18:45:16.149630 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:45:16.150421 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:45:16.151446 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:45:16.153076 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:45:16.154885 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:45:16.173091 disk-uuid[675]: Primary Header is updated.
Dec 12 18:45:16.173091 disk-uuid[675]: Secondary Entries is updated.
Dec 12 18:45:16.173091 disk-uuid[675]: Secondary Header is updated.
Dec 12 18:45:16.178873 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:45:16.182932 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:45:16.194682 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:45:17.204573 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 18:45:17.204733 disk-uuid[678]: The operation has completed successfully.
Dec 12 18:45:17.322751 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:45:17.322900 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:45:17.352217 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:45:17.366407 sh[941]: Success
Dec 12 18:45:17.386714 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:45:17.386791 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:45:17.390836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:45:17.400573 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Dec 12 18:45:17.510620 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:45:17.513481 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:45:17.522628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:45:17.542556 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (964)
Dec 12 18:45:17.546455 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:45:17.546550 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:45:17.621536 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 18:45:17.621607 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:45:17.621620 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:45:17.625616 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:45:17.627208 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:45:17.627901 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:45:17.628681 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:45:17.631345 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:45:17.670839 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (997)
Dec 12 18:45:17.670924 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:17.673418 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:45:17.691498 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 18:45:17.691618 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 18:45:17.699597 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:17.700648 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:45:17.704679 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:45:17.741154 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:45:17.744351 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:45:17.797497 systemd-networkd[1133]: lo: Link UP
Dec 12 18:45:17.798430 systemd-networkd[1133]: lo: Gained carrier
Dec 12 18:45:17.801571 systemd-networkd[1133]: Enumeration completed
Dec 12 18:45:17.801992 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:45:17.802478 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:45:17.802484 systemd-networkd[1133]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:45:17.804165 systemd[1]: Reached target network.target - Network.
Dec 12 18:45:17.807388 systemd-networkd[1133]: eth0: Link UP
Dec 12 18:45:17.807393 systemd-networkd[1133]: eth0: Gained carrier
Dec 12 18:45:17.807411 systemd-networkd[1133]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:45:17.821605 systemd-networkd[1133]: eth0: DHCPv4 address 172.31.25.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 12 18:45:18.068932 ignition[1087]: Ignition 2.22.0
Dec 12 18:45:18.068955 ignition[1087]: Stage: fetch-offline
Dec 12 18:45:18.069127 ignition[1087]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:18.069135 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:18.069333 ignition[1087]: Ignition finished successfully
Dec 12 18:45:18.071355 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:45:18.073292 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 18:45:18.108152 ignition[1142]: Ignition 2.22.0
Dec 12 18:45:18.108171 ignition[1142]: Stage: fetch
Dec 12 18:45:18.108588 ignition[1142]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:18.108601 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:18.108729 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:18.126364 ignition[1142]: PUT result: OK
Dec 12 18:45:18.129504 ignition[1142]: parsed url from cmdline: ""
Dec 12 18:45:18.129528 ignition[1142]: no config URL provided
Dec 12 18:45:18.129538 ignition[1142]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:45:18.129554 ignition[1142]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:45:18.129575 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:18.130303 ignition[1142]: PUT result: OK
Dec 12 18:45:18.130359 ignition[1142]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 12 18:45:18.131445 ignition[1142]: GET result: OK
Dec 12 18:45:18.131646 ignition[1142]: parsing config with SHA512: 126faeb36c320fe0a1735b09648b55b6166a40f6f080d242e75402487277e46cb8210a151a9dd2d4d30ce6a82354da44d1e71123646867684a798d4b284fa194
Dec 12 18:45:18.138818 unknown[1142]: fetched base config from "system"
Dec 12 18:45:18.139400 ignition[1142]: fetch: fetch complete
Dec 12 18:45:18.138834 unknown[1142]: fetched base config from "system"
Dec 12 18:45:18.139405 ignition[1142]: fetch: fetch passed
Dec 12 18:45:18.138845 unknown[1142]: fetched user config from "aws"
Dec 12 18:45:18.139453 ignition[1142]: Ignition finished successfully
Dec 12 18:45:18.142254 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 18:45:18.144487 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:45:18.187427 ignition[1148]: Ignition 2.22.0
Dec 12 18:45:18.187443 ignition[1148]: Stage: kargs
Dec 12 18:45:18.188046 ignition[1148]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:18.188059 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:18.188180 ignition[1148]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:18.189084 ignition[1148]: PUT result: OK
Dec 12 18:45:18.191976 ignition[1148]: kargs: kargs passed
Dec 12 18:45:18.192069 ignition[1148]: Ignition finished successfully
Dec 12 18:45:18.193976 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:45:18.196075 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:45:18.229630 ignition[1155]: Ignition 2.22.0
Dec 12 18:45:18.229645 ignition[1155]: Stage: disks
Dec 12 18:45:18.230027 ignition[1155]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:18.230039 ignition[1155]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:18.230140 ignition[1155]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:18.234801 ignition[1155]: PUT result: OK
Dec 12 18:45:18.237486 ignition[1155]: disks: disks passed
Dec 12 18:45:18.237561 ignition[1155]: Ignition finished successfully
Dec 12 18:45:18.239264 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:45:18.240278 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:45:18.240946 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:45:18.241275 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:45:18.241809 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:45:18.242367 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:45:18.244191 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:45:18.293693 systemd-fsck[1164]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:45:18.296633 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:45:18.298370 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:45:18.443538 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:45:18.444710 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:45:18.445640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:45:18.447920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:45:18.450307 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:45:18.453146 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:45:18.453216 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:45:18.453251 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:45:18.463945 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:45:18.466079 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:45:18.478546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1183)
Dec 12 18:45:18.485591 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:18.485671 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:45:18.493060 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 18:45:18.493143 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 18:45:18.495749 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:45:18.724031 initrd-setup-root[1208]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:45:18.729817 initrd-setup-root[1215]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:45:18.734709 initrd-setup-root[1222]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:45:18.738766 initrd-setup-root[1229]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:45:18.902644 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:45:18.904799 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:45:18.906700 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:45:18.928205 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:45:18.930103 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:18.956343 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:45:18.966640 ignition[1296]: INFO : Ignition 2.22.0
Dec 12 18:45:18.967683 ignition[1296]: INFO : Stage: mount
Dec 12 18:45:18.968523 ignition[1296]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:18.968523 ignition[1296]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:18.969704 ignition[1296]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:18.969704 ignition[1296]: INFO : PUT result: OK
Dec 12 18:45:18.972198 ignition[1296]: INFO : mount: mount passed
Dec 12 18:45:18.973587 ignition[1296]: INFO : Ignition finished successfully
Dec 12 18:45:18.974331 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:45:18.976323 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:45:18.998419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:45:19.028090 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1308)
Dec 12 18:45:19.028154 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:45:19.031455 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:45:19.038389 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 18:45:19.038479 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 18:45:19.040549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:45:19.095558 ignition[1325]: INFO : Ignition 2.22.0
Dec 12 18:45:19.095558 ignition[1325]: INFO : Stage: files
Dec 12 18:45:19.097126 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:19.097126 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:19.097126 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:19.097126 ignition[1325]: INFO : PUT result: OK
Dec 12 18:45:19.103974 ignition[1325]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:45:19.107051 ignition[1325]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:45:19.107051 ignition[1325]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:45:19.111307 ignition[1325]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:45:19.112952 ignition[1325]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:45:19.112952 ignition[1325]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:45:19.112386 unknown[1325]: wrote ssh authorized keys file for user: core
Dec 12 18:45:19.117129 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:45:19.117129 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 12 18:45:19.229402 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:45:19.400143 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:45:19.403659 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:45:19.420711 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:45:19.420711 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:45:19.420711 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:19.420711 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:19.420711 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:19.420711 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 12 18:45:19.535951 systemd-networkd[1133]: eth0: Gained IPv6LL
Dec 12 18:45:20.181964 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:45:20.520467 ignition[1325]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:45:20.520467 ignition[1325]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:45:20.524101 ignition[1325]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:45:20.528713 ignition[1325]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:45:20.528713 ignition[1325]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:45:20.528713 ignition[1325]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:45:20.532374 ignition[1325]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:45:20.532374 ignition[1325]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:45:20.532374 ignition[1325]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:45:20.532374 ignition[1325]: INFO : files: files passed
Dec 12 18:45:20.532374 ignition[1325]: INFO : Ignition finished successfully
Dec 12 18:45:20.530914 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:45:20.533005 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:45:20.539825 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:45:20.549899 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:45:20.550045 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:45:20.558486 initrd-setup-root-after-ignition[1356]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:20.558486 initrd-setup-root-after-ignition[1356]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:20.562691 initrd-setup-root-after-ignition[1360]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:20.563664 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:45:20.564804 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:45:20.566769 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:45:20.621607 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:45:20.621779 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:45:20.623059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:45:20.624367 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:45:20.625273 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:45:20.626489 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:45:20.668468 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:45:20.670853 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:45:20.698349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:45:20.699119 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:45:20.700343 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:45:20.701294 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:45:20.701553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:45:20.702707 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:45:20.703782 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:45:20.704484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:45:20.705321 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:45:20.706102 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:45:20.706904 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:45:20.707852 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:45:20.708688 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:45:20.709507 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:45:20.710636 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:45:20.711407 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:45:20.712296 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:45:20.712486 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:45:20.713625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:45:20.714455 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:45:20.715134 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:45:20.715997 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:45:20.716614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:45:20.716794 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:45:20.718144 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:45:20.718387 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:45:20.719102 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:45:20.719300 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:45:20.721619 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:45:20.724411 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:45:20.726602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:45:20.726825 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:45:20.728897 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:45:20.729114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:45:20.738022 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:45:20.738168 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:45:20.764041 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:45:20.765618 ignition[1380]: INFO : Ignition 2.22.0
Dec 12 18:45:20.765618 ignition[1380]: INFO : Stage: umount
Dec 12 18:45:20.767133 ignition[1380]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:20.767133 ignition[1380]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 18:45:20.767133 ignition[1380]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 18:45:20.767133 ignition[1380]: INFO : PUT result: OK
Dec 12 18:45:20.773925 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:45:20.774785 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:45:20.776252 ignition[1380]: INFO : umount: umount passed
Dec 12 18:45:20.776252 ignition[1380]: INFO : Ignition finished successfully
Dec 12 18:45:20.777373 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:45:20.777563 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:45:20.778836 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:45:20.778953 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:45:20.779847 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:45:20.779914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:45:20.780531 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:45:20.780594 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:45:20.781190 systemd[1]: Stopped target network.target - Network.
Dec 12 18:45:20.781860 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:45:20.781927 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:45:20.782586 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:45:20.783160 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:45:20.783235 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:45:20.784253 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:45:20.784887 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:45:20.785550 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:45:20.785611 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:45:20.786204 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:45:20.786255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:45:20.786846 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:45:20.786926 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:45:20.787621 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:45:20.787684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:45:20.788683 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:45:20.788755 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:45:20.789545 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:45:20.790212 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:45:20.796724 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:45:20.796862 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:45:20.801439 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:45:20.801903 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:45:20.801969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:45:20.804763 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:45:20.805091 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:45:20.805250 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:45:20.807391 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:45:20.808198 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:45:20.808959 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:45:20.809018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:45:20.810776 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:45:20.811305 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:45:20.811381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:45:20.812190 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:45:20.812250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:45:20.817665 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:45:20.817753 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:45:20.818958 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:45:20.823046 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:45:20.833980 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:45:20.834764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:45:20.837486 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:45:20.837907 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:45:20.838604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:45:20.838665 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:45:20.840160 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:45:20.840240 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:45:20.841449 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:45:20.841534 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:45:20.842775 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:45:20.842853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:45:20.845499 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:45:20.849657 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:45:20.849765 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:45:20.852319 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:45:20.852404 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:45:20.853020 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 18:45:20.853088 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:45:20.853805 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:45:20.853866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:45:20.854920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:45:20.854978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:20.858497 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:45:20.858645 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:45:20.865725 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:45:20.865861 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:45:20.867637 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:45:20.869411 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:45:20.896838 systemd[1]: Switching root.
Dec 12 18:45:20.939285 systemd-journald[187]: Journal stopped
Dec 12 18:45:22.695238 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:45:22.695340 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:45:22.695363 kernel: SELinux: policy capability open_perms=1
Dec 12 18:45:22.695389 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:45:22.695413 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:45:22.695434 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:45:22.696845 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:45:22.696879 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:45:22.696910 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:45:22.696929 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:45:22.696950 kernel: audit: type=1403 audit(1765565121.313:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:45:22.696973 systemd[1]: Successfully loaded SELinux policy in 77.914ms.
Dec 12 18:45:22.697009 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.414ms.
Dec 12 18:45:22.697033 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:45:22.697055 systemd[1]: Detected virtualization amazon.
Dec 12 18:45:22.697076 systemd[1]: Detected architecture x86-64.
Dec 12 18:45:22.697097 systemd[1]: Detected first boot.
Dec 12 18:45:22.697122 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:45:22.697142 kernel: Guest personality initialized and is inactive
Dec 12 18:45:22.697162 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:45:22.697181 kernel: Initialized host personality
Dec 12 18:45:22.697201 zram_generator::config[1424]: No configuration found.
Dec 12 18:45:22.697230 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:45:22.697249 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:45:22.697278 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:45:22.697298 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:45:22.697322 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:45:22.697345 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:45:22.697367 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:45:22.697388 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:45:22.697409 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:45:22.697431 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:45:22.697452 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:45:22.697472 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:45:22.697495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:45:22.697555 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:45:22.697577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:45:22.697596 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:45:22.697616 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:45:22.697636 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:45:22.697657 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:45:22.697681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:45:22.697700 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:45:22.697718 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:45:22.697735 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:45:22.697755 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:45:22.697774 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:45:22.697794 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:45:22.697814 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:45:22.697833 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:45:22.697851 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:45:22.697875 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:45:22.699551 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:45:22.699600 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:45:22.699622 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:45:22.699644 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:45:22.699664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:45:22.699684 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:45:22.699702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:45:22.699722 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:45:22.699748 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:45:22.699766 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:45:22.699788 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:45:22.699811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:22.699833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:45:22.699854 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:45:22.699876 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:45:22.699900 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:45:22.699923 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:45:22.699945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:45:22.699964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:22.699984 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:45:22.700010 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:45:22.700036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:45:22.700054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:45:22.700074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:45:22.700094 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:45:22.700117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:45:22.700139 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:45:22.700161 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:45:22.700182 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:45:22.700204 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:45:22.700225 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:45:22.700249 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:22.700271 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:45:22.700296 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:45:22.700318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:45:22.700339 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:45:22.700362 kernel: loop: module loaded
Dec 12 18:45:22.700383 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:45:22.700405 kernel: fuse: init (API version 7.41)
Dec 12 18:45:22.700431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:45:22.700453 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:45:22.700475 systemd[1]: Stopped verity-setup.service.
Dec 12 18:45:22.700497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:22.701615 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:45:22.701649 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:45:22.701670 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:45:22.701690 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:45:22.701710 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:45:22.701730 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:45:22.701749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:45:22.701769 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:45:22.701788 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:45:22.701812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:45:22.701832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:45:22.701851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:45:22.701872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:45:22.701892 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:45:22.701911 kernel: ACPI: bus type drm_connector registered
Dec 12 18:45:22.701932 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:45:22.701952 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:45:22.701972 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:45:22.701995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:45:22.702015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:45:22.702034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:45:22.702054 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:45:22.702074 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:45:22.702094 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:45:22.702117 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:45:22.702139 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:45:22.702202 systemd-journald[1503]: Collecting audit messages is disabled.
Dec 12 18:45:22.702239 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:45:22.702259 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:45:22.702279 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:45:22.702299 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:45:22.702321 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:45:22.702341 systemd-journald[1503]: Journal started
Dec 12 18:45:22.702379 systemd-journald[1503]: Runtime Journal (/run/log/journal/ec2593bfef447864f111005356486736) is 4.7M, max 38.1M, 33.3M free.
Dec 12 18:45:22.205868 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:45:22.703795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:22.221133 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 12 18:45:22.221684 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:45:22.714752 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:45:22.714851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:45:22.724545 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:45:22.728564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:45:22.734121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:45:22.752661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:45:22.752739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:45:22.767588 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:45:22.763559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:45:22.765837 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:45:22.766925 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:45:22.772627 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:45:22.810756 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:45:22.815868 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:45:22.823634 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:45:22.864162 kernel: loop0: detected capacity change from 0 to 72368
Dec 12 18:45:22.869408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:45:22.891778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:45:22.907368 systemd-journald[1503]: Time spent on flushing to /var/log/journal/ec2593bfef447864f111005356486736 is 58.184ms for 1025 entries.
Dec 12 18:45:22.907368 systemd-journald[1503]: System Journal (/var/log/journal/ec2593bfef447864f111005356486736) is 8M, max 195.6M, 187.6M free.
Dec 12 18:45:22.992252 systemd-journald[1503]: Received client request to flush runtime journal.
Dec 12 18:45:22.992336 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:45:22.996977 kernel: loop1: detected capacity change from 0 to 229808
Dec 12 18:45:22.910586 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Dec 12 18:45:22.910610 systemd-tmpfiles[1540]: ACLs are not supported, ignoring.
Dec 12 18:45:22.911586 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:45:22.937644 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:45:22.944719 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:45:22.997367 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:45:23.043419 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:45:23.046788 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:45:23.077614 systemd-tmpfiles[1578]: ACLs are not supported, ignoring.
Dec 12 18:45:23.077648 systemd-tmpfiles[1578]: ACLs are not supported, ignoring.
Dec 12 18:45:23.084053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:45:23.120852 kernel: loop2: detected capacity change from 0 to 128560
Dec 12 18:45:23.226994 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:45:23.223585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:45:23.340546 kernel: loop4: detected capacity change from 0 to 72368
Dec 12 18:45:23.381560 kernel: loop5: detected capacity change from 0 to 229808
Dec 12 18:45:23.410787 kernel: loop6: detected capacity change from 0 to 128560
Dec 12 18:45:23.429554 kernel: loop7: detected capacity change from 0 to 110984
Dec 12 18:45:23.466161 (sd-merge)[1584]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 12 18:45:23.468644 (sd-merge)[1584]: Merged extensions into '/usr'.
Dec 12 18:45:23.474243 systemd[1]: Reload requested from client PID 1539 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:45:23.474413 systemd[1]: Reloading...
Dec 12 18:45:23.609604 zram_generator::config[1609]: No configuration found.
Dec 12 18:45:23.922614 systemd[1]: Reloading finished in 447 ms.
Dec 12 18:45:23.940274 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:45:23.941172 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:45:23.954840 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:45:23.958997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:45:23.964299 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:45:23.991783 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:45:23.994034 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:45:23.994449 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:45:23.995430 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:45:23.999146 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:45:24.000008 systemd-tmpfiles[1664]: ACLs are not supported, ignoring.
Dec 12 18:45:24.001206 systemd-tmpfiles[1664]: ACLs are not supported, ignoring.
Dec 12 18:45:24.002637 systemd[1]: Reload requested from client PID 1663 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:45:24.002653 systemd[1]: Reloading...
Dec 12 18:45:24.021668 systemd-tmpfiles[1664]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:45:24.021837 systemd-tmpfiles[1664]: Skipping /boot
Dec 12 18:45:24.033306 systemd-udevd[1665]: Using default interface naming scheme 'v255'.
Dec 12 18:45:24.037881 systemd-tmpfiles[1664]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:45:24.038024 systemd-tmpfiles[1664]: Skipping /boot
Dec 12 18:45:24.124543 zram_generator::config[1689]: No configuration found.
Dec 12 18:45:24.165247 ldconfig[1532]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:45:24.422412 (udev-worker)[1733]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 18:45:24.499533 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:45:24.513544 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:45:24.522554 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:45:24.532542 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 12 18:45:24.545552 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 12 18:45:24.553540 kernel: ACPI: button: Sleep Button [SLPF] Dec 12 18:45:24.634792 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:45:24.635433 systemd[1]: Reloading finished in 632 ms. Dec 12 18:45:24.646451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:45:24.649632 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:45:24.651789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:45:24.688759 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:45:24.695788 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:45:24.699376 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:45:24.704403 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:45:24.720624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:45:24.726627 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 18:45:24.736092 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:24.736389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 12 18:45:24.739725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:45:24.742127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:45:24.745586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:45:24.746473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:45:24.746833 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:45:24.746977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:24.753336 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:45:24.756944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:24.757387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:45:24.758016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:45:24.758175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:45:24.758318 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 12 18:45:24.769836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:24.770231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:45:24.774625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:45:24.775407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:45:24.775608 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:45:24.775867 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:45:24.777705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:45:24.789204 systemd[1]: Finished ensure-sysext.service. Dec 12 18:45:24.811506 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:45:24.827970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:45:24.828225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:45:24.838135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:45:24.843495 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:45:24.858355 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:45:24.858687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:45:24.859654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 12 18:45:24.879854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:45:24.882267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:45:24.884872 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:45:24.885126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:45:24.886892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:45:24.931603 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:45:24.932907 augenrules[1909]: No rules Dec 12 18:45:24.934532 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:45:24.937018 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:45:24.938569 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:45:24.953994 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:45:25.015090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 18:45:25.023635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:45:25.031642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:45:25.068829 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:45:25.088214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:45:25.089083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 12 18:45:25.092961 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:45:25.097739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:45:25.109318 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:45:25.255007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:45:25.276893 systemd-networkd[1833]: lo: Link UP Dec 12 18:45:25.277271 systemd-networkd[1833]: lo: Gained carrier Dec 12 18:45:25.279228 systemd-networkd[1833]: Enumeration completed Dec 12 18:45:25.279632 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:45:25.281767 systemd-networkd[1833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:45:25.281904 systemd-networkd[1833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:45:25.283682 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:45:25.285264 systemd-networkd[1833]: eth0: Link UP Dec 12 18:45:25.285581 systemd-networkd[1833]: eth0: Gained carrier Dec 12 18:45:25.285698 systemd-networkd[1833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:45:25.287800 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 18:45:25.296667 systemd-networkd[1833]: eth0: DHCPv4 address 172.31.25.153/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 18:45:25.309182 systemd-resolved[1837]: Positive Trust Anchors: Dec 12 18:45:25.309593 systemd-resolved[1837]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:45:25.309719 systemd-resolved[1837]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:45:25.315639 systemd-resolved[1837]: Defaulting to hostname 'linux'. Dec 12 18:45:25.317802 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:45:25.318752 systemd[1]: Reached target network.target - Network. Dec 12 18:45:25.319568 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:45:25.320262 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:45:25.321016 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:45:25.321701 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:45:25.322282 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:45:25.322836 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:45:25.323277 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:45:25.323685 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:45:25.324043 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:45:25.324081 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:45:25.324442 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:45:25.326485 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:45:25.328619 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:45:25.331240 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:45:25.331879 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:45:25.332284 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:45:25.335642 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:45:25.336996 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:45:25.338293 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:45:25.338857 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:45:25.340803 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:45:25.341188 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:45:25.341759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:45:25.341790 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:45:25.343012 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:45:25.348276 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:45:25.353469 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:45:25.355381 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Dec 12 18:45:25.358139 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:45:25.362641 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:45:25.363586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:45:25.367752 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:45:25.370311 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:45:25.375657 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 18:45:25.385359 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:45:25.393620 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 12 18:45:25.394844 jq[1952]: false Dec 12 18:45:25.402868 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:45:25.406271 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Refreshing passwd entry cache Dec 12 18:45:25.403546 oslogin_cache_refresh[1954]: Refreshing passwd entry cache Dec 12 18:45:25.408139 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:45:25.411844 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Failure getting users, quitting Dec 12 18:45:25.411844 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:45:25.411844 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Refreshing group entry cache Dec 12 18:45:25.409681 oslogin_cache_refresh[1954]: Failure getting users, quitting Dec 12 18:45:25.409699 oslogin_cache_refresh[1954]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 12 18:45:25.409741 oslogin_cache_refresh[1954]: Refreshing group entry cache Dec 12 18:45:25.413057 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Failure getting groups, quitting Dec 12 18:45:25.413057 google_oslogin_nss_cache[1954]: oslogin_cache_refresh[1954]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:45:25.413052 oslogin_cache_refresh[1954]: Failure getting groups, quitting Dec 12 18:45:25.413064 oslogin_cache_refresh[1954]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:45:25.414643 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:45:25.417094 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:45:25.418064 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:45:25.419920 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:45:25.427723 extend-filesystems[1953]: Found /dev/nvme0n1p6 Dec 12 18:45:25.425116 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:45:25.434705 extend-filesystems[1953]: Found /dev/nvme0n1p9 Dec 12 18:45:25.436728 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:45:25.442500 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:45:25.451321 update_engine[1967]: I20251212 18:45:25.450891 1967 main.cc:92] Flatcar Update Engine starting Dec 12 18:45:25.455160 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9 Dec 12 18:45:25.458802 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:45:25.459373 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Dec 12 18:45:25.460468 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:45:25.469760 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:45:25.469951 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:45:25.482237 jq[1968]: true Dec 12 18:45:25.484812 coreos-metadata[1949]: Dec 12 18:45:25.480 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 18:45:25.484812 coreos-metadata[1949]: Dec 12 18:45:25.483 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 12 18:45:25.484812 coreos-metadata[1949]: Dec 12 18:45:25.483 INFO Fetch successful Dec 12 18:45:25.484812 coreos-metadata[1949]: Dec 12 18:45:25.483 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 12 18:45:25.484812 coreos-metadata[1949]: Dec 12 18:45:25.484 INFO Fetch successful Dec 12 18:45:25.484812 coreos-metadata[1949]: Dec 12 18:45:25.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 12 18:45:25.488235 coreos-metadata[1949]: Dec 12 18:45:25.485 INFO Fetch successful Dec 12 18:45:25.488235 coreos-metadata[1949]: Dec 12 18:45:25.485 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 12 18:45:25.488235 coreos-metadata[1949]: Dec 12 18:45:25.486 INFO Fetch successful Dec 12 18:45:25.488235 coreos-metadata[1949]: Dec 12 18:45:25.486 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 12 18:45:25.488235 coreos-metadata[1949]: Dec 12 18:45:25.487 INFO Fetch failed with 404: resource not found Dec 12 18:45:25.488235 coreos-metadata[1949]: Dec 12 18:45:25.487 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 12 18:45:25.490717 coreos-metadata[1949]: Dec 12 18:45:25.489 INFO Fetch successful Dec 12 18:45:25.490717 coreos-metadata[1949]: Dec 12 18:45:25.489 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.492 INFO Fetch successful Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.493 INFO Fetch successful Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.494 INFO Fetch successful Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.494 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 12 18:45:25.500839 coreos-metadata[1949]: Dec 12 18:45:25.495 INFO Fetch successful Dec 12 18:45:25.527722 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9 Dec 12 18:45:25.529064 jq[1988]: true Dec 12 18:45:25.526353 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:45:25.533277 extend-filesystems[2006]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:45:25.530343 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:45:25.542622 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:45:25.552542 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 12 18:45:25.560348 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:45:25.561040 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:45:25.574569 tar[1975]: linux-amd64/LICENSE Dec 12 18:45:25.575236 tar[1975]: linux-amd64/helm Dec 12 18:45:25.586109 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 12 18:45:25.597816 ntpd[1956]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: ---------------------------------------------------- Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: corporation. Support and training for ntp-4 are Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: available at https://www.nwtime.org/support Dec 12 18:45:25.600186 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: ---------------------------------------------------- Dec 12 18:45:25.597885 ntpd[1956]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:45:25.597896 ntpd[1956]: ---------------------------------------------------- Dec 12 18:45:25.604766 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: proto: precision = 0.075 usec (-24) Dec 12 18:45:25.597905 ntpd[1956]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:45:25.597914 ntpd[1956]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:45:25.597924 ntpd[1956]: corporation. Support and training for ntp-4 are
Dec 12 18:45:25.597933 ntpd[1956]: available at https://www.nwtime.org/support Dec 12 18:45:25.597942 ntpd[1956]: ---------------------------------------------------- Dec 12 18:45:25.603364 ntpd[1956]: proto: precision = 0.075 usec (-24) Dec 12 18:45:25.608471 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: basedate set to 2025-11-30 Dec 12 18:45:25.608471 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: gps base set to 2025-11-30 (week 2395) Dec 12 18:45:25.608471 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:45:25.608471 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:45:25.606715 ntpd[1956]: basedate set to 2025-11-30 Dec 12 18:45:25.608882 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:45:25.606736 ntpd[1956]: gps base set to 2025-11-30 (week 2395) Dec 12 18:45:25.606867 ntpd[1956]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:45:25.606894 ntpd[1956]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:45:25.608551 dbus-daemon[1950]: [system] SELinux support is enabled Dec 12 18:45:25.628759 kernel: ntpd[1956]: segfault at 24 ip 000055fb5cb07aeb sp 00007ffd33473b60 error 4 in ntpd[68aeb,55fb5caa5000+80000] likely on CPU 0 (core 0, socket 0) Dec 12 18:45:25.628843 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 12 18:45:25.628870 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:45:25.628870 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Listen normally on 3 eth0 172.31.25.153:123 Dec 12 18:45:25.628870 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: Listen normally on 4 lo [::1]:123 Dec 12 18:45:25.628870 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: bind(21) AF_INET6 [fe80::418:2dff:fe7a:98f%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 12 18:45:25.628870 ntpd[1956]: 12 Dec 18:45:25 ntpd[1956]: unable to create socket on eth0 (5) for [fe80::418:2dff:fe7a:98f%2]:123 Dec 12 18:45:25.615073 ntpd[1956]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:45:25.625954 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:45:25.615114 ntpd[1956]: Listen normally on 3 eth0 172.31.25.153:123 Dec 12 18:45:25.625987 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:45:25.615147 ntpd[1956]: Listen normally on 4 lo [::1]:123 Dec 12 18:45:25.626753 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:45:25.615177 ntpd[1956]: bind(21) AF_INET6 [fe80::418:2dff:fe7a:98f%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:45:25.626775 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:45:25.615198 ntpd[1956]: unable to create socket on eth0 (5) for [fe80::418:2dff:fe7a:98f%2]:123 Dec 12 18:45:25.639928 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1833 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:45:25.650556 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 18:45:25.653919 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:45:25.655554 update_engine[1967]: I20251212 18:45:25.654881 1967 update_check_scheduler.cc:74] Next update check in 5m30s Dec 12 18:45:25.675350 systemd-coredump[2037]: Process 1956 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 12 18:45:25.684643 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:45:25.695525 systemd-logind[1966]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:45:25.696033 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 12 18:45:25.701809 systemd-logind[1966]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 12 18:45:25.702096 systemd[1]: Started systemd-coredump@0-2037-0.service - Process Core Dump (PID 2037/UID 0). Dec 12 18:45:25.703457 systemd-logind[1966]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:45:25.706210 systemd-logind[1966]: New seat seat0. Dec 12 18:45:25.710887 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:45:25.735322 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 12 18:45:25.757338 extend-filesystems[2006]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 12 18:45:25.757338 extend-filesystems[2006]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 12 18:45:25.757338 extend-filesystems[2006]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 12 18:45:25.772130 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9 Dec 12 18:45:25.759072 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:45:25.790900 bash[2038]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:45:25.765276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:45:25.769477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Dec 12 18:45:25.802883 systemd[1]: Starting sshkeys.service... Dec 12 18:45:25.877792 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:45:25.882989 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:45:26.190895 coreos-metadata[2086]: Dec 12 18:45:26.190 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 18:45:26.219534 coreos-metadata[2086]: Dec 12 18:45:26.217 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 12 18:45:26.221837 coreos-metadata[2086]: Dec 12 18:45:26.221 INFO Fetch successful Dec 12 18:45:26.221837 coreos-metadata[2086]: Dec 12 18:45:26.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 18:45:26.232596 coreos-metadata[2086]: Dec 12 18:45:26.229 INFO Fetch successful Dec 12 18:45:26.239325 unknown[2086]: wrote ssh authorized keys file for user: core Dec 12 18:45:26.286192 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:45:26.302896 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 18:45:26.305333 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 18:45:26.307323 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 18:45:26.324808 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 18:45:26.337896 update-ssh-keys[2142]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:45:26.339916 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
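The coreos-metadata fetches above follow the IMDSv2 token flow: a PUT to `/latest/api/token` to obtain a session token, then GETs against the metadata paths presenting that token. A sketch of that flow, assuming the endpoint paths visible in the log; the requests are only constructed here, not sent:

```python
import urllib.request

BASE = "http://169.254.169.254"

def build_token_request(ttl_seconds=21600):
    # Step 1: PUT /latest/api/token with a TTL header to obtain a session token.
    return urllib.request.Request(
        f"{BASE}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def build_metadata_request(token, path="2021-01-03/meta-data/public-keys"):
    # Step 2: GET the metadata path seen in the log, presenting the token.
    return urllib.request.Request(
        f"{BASE}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )

tok_req = build_token_request()
meta_req = build_metadata_request("example-token")  # placeholder token
print(tok_req.get_method(), tok_req.full_url)
print(meta_req.get_method(), meta_req.full_url)
```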
Dec 12 18:45:26.344373 containerd[1990]: time="2025-12-12T18:45:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:45:26.345716 systemd[1]: Finished sshkeys.service. Dec 12 18:45:26.352551 containerd[1990]: time="2025-12-12T18:45:26.348885986Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:45:26.413020 containerd[1990]: time="2025-12-12T18:45:26.412966747Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.115µs" Dec 12 18:45:26.413020 containerd[1990]: time="2025-12-12T18:45:26.413014659Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:45:26.413172 containerd[1990]: time="2025-12-12T18:45:26.413037385Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:45:26.413245 containerd[1990]: time="2025-12-12T18:45:26.413223329Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:45:26.413283 containerd[1990]: time="2025-12-12T18:45:26.413252881Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:45:26.413318 containerd[1990]: time="2025-12-12T18:45:26.413287713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:45:26.413388 containerd[1990]: time="2025-12-12T18:45:26.413367907Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:45:26.413435 containerd[1990]: time="2025-12-12T18:45:26.413389654Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:45:26.416746 containerd[1990]: time="2025-12-12T18:45:26.416278462Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:45:26.416746 containerd[1990]: time="2025-12-12T18:45:26.416741784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:45:26.416997 containerd[1990]: time="2025-12-12T18:45:26.416771359Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:45:26.416997 containerd[1990]: time="2025-12-12T18:45:26.416985109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:45:26.417681 containerd[1990]: time="2025-12-12T18:45:26.417492335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:45:26.419615 containerd[1990]: time="2025-12-12T18:45:26.418943605Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:45:26.419615 containerd[1990]: time="2025-12-12T18:45:26.419145467Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:45:26.419615 containerd[1990]: time="2025-12-12T18:45:26.419166736Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:45:26.419615 containerd[1990]: time="2025-12-12T18:45:26.419504672Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:45:26.422842 containerd[1990]: time="2025-12-12T18:45:26.421263167Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:45:26.422842 containerd[1990]: time="2025-12-12T18:45:26.421720411Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428459314Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428565023Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428589481Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428616562Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428641010Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428655197Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428672082Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428689074Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428715463Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428731060Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428744750Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428762616Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428924263Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:45:26.429623 containerd[1990]: time="2025-12-12T18:45:26.428953085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.428970996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.428986615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429002414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429015168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429030460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429043930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 
18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429058907Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429072875Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429086403Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429158182Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429176433Z" level=info msg="Start snapshots syncer" Dec 12 18:45:26.430167 containerd[1990]: time="2025-12-12T18:45:26.429206255Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:45:26.436488 containerd[1990]: time="2025-12-12T18:45:26.431615112Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:45:26.436488 containerd[1990]: time="2025-12-12T18:45:26.431698133Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:45:26.439700 containerd[1990]: time="2025-12-12T18:45:26.439600532Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.439850489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.439921632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.439942679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.439968541Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.439988570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440004346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440020529Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440058623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440074196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440090889Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440149534Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440173351Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:45:26.440396 containerd[1990]: time="2025-12-12T18:45:26.440244418Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440262508Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440274203Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440294411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440316647Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440338288Z" level=info msg="runtime interface created" Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440345427Z" level=info msg="created NRI interface" Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440358141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440390026Z" level=info msg="Connect containerd service" Dec 12 18:45:26.440961 containerd[1990]: time="2025-12-12T18:45:26.440423046Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:45:26.443992 
containerd[1990]: time="2025-12-12T18:45:26.443955909Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:45:26.462568 systemd-coredump[2039]: Process 1956 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1956: #0 0x000055fb5cb07aeb n/a (ntpd + 0x68aeb) #1 0x000055fb5cab0cdf n/a (ntpd + 0x11cdf) #2 0x000055fb5cab1575 n/a (ntpd + 0x12575) #3 0x000055fb5caacd8a n/a (ntpd + 0xdd8a) #4 0x000055fb5caae5d3 n/a (ntpd + 0xf5d3) #5 0x000055fb5cab6fd1 n/a (ntpd + 0x17fd1) #6 0x000055fb5caa7c2d n/a (ntpd + 0x8c2d) #7 0x00007f0366f0516c n/a (libc.so.6 + 0x2716c) #8 0x00007f0366f05229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055fb5caa7c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 12 18:45:26.471228 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 12 18:45:26.471411 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 12 18:45:26.480752 systemd[1]: systemd-coredump@0-2037-0.service: Deactivated successfully. Dec 12 18:45:26.634388 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 12 18:45:26.641090 systemd[1]: Started ntpd.service - Network Time Service. 
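containerd's CRI plugin reports above that no CNI network config was found in /etc/cni/net.d. For reference, this is the kind of minimal bridge conflist the plugin looks for; the network name, bridge device, and subnet below are illustrative, not from this host (Python is used only to keep the sketch self-contained):

```python
import json

# Illustrative minimal CNI conflist; on a real node this JSON would live at
# something like /etc/cni/net.d/10-example.conflist (name and subnet made up).
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }
    ],
}

print(json.dumps(conflist, indent=2))
```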
Dec 12 18:45:26.663037 polkitd[2147]: Started polkitd version 126 Dec 12 18:45:26.673080 polkitd[2147]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 18:45:26.675077 polkitd[2147]: Loading rules from directory /run/polkit-1/rules.d Dec 12 18:45:26.675303 polkitd[2147]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:45:26.675984 polkitd[2147]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 18:45:26.676019 polkitd[2147]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 18:45:26.676072 polkitd[2147]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 18:45:26.676776 polkitd[2147]: Finished loading, compiling and executing 2 rules Dec 12 18:45:26.677059 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 18:45:26.680853 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 18:45:26.681448 polkitd[2147]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 18:45:26.696233 ntpd[2170]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:45:26.697957 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 12 18:45:26.696316 ntpd[2170]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: ---------------------------------------------------- Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: corporation. Support and training for ntp-4 are Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: available at https://www.nwtime.org/support Dec 12 18:45:26.699609 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: ---------------------------------------------------- Dec 12 18:45:26.699049 ntpd[2170]: ---------------------------------------------------- Dec 12 18:45:26.699994 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: proto: precision = 0.098 usec (-23) Dec 12 18:45:26.699059 ntpd[2170]: ntp-4 is maintained by Network Time Foundation, Dec 12 18:45:26.699068 ntpd[2170]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 18:45:26.699077 ntpd[2170]: corporation. Support and training for ntp-4 are Dec 12 18:45:26.699085 ntpd[2170]: available at https://www.nwtime.org/support Dec 12 18:45:26.699094 ntpd[2170]: ---------------------------------------------------- Dec 12 18:45:26.700241 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: basedate set to 2025-11-30 Dec 12 18:45:26.700241 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: gps base set to 2025-11-30 (week 2395) Dec 12 18:45:26.699900 ntpd[2170]: proto: precision = 0.098 usec (-23) Dec 12 18:45:26.700357 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:45:26.700357 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:45:26.700167 ntpd[2170]: basedate set to 2025-11-30 Dec 12 18:45:26.700179 ntpd[2170]: gps base set to 2025-11-30 (week 2395) Dec 12 18:45:26.700272 ntpd[2170]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 18:45:26.700301 ntpd[2170]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 18:45:26.700641 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:45:26.700641 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: Listen normally on 3 eth0 172.31.25.153:123 Dec 12 18:45:26.700641 ntpd[2170]: 12 Dec 
18:45:26 ntpd[2170]: Listen normally on 4 lo [::1]:123 Dec 12 18:45:26.700641 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: bind(21) AF_INET6 [fe80::418:2dff:fe7a:98f%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:45:26.700505 ntpd[2170]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 18:45:26.700828 ntpd[2170]: 12 Dec 18:45:26 ntpd[2170]: unable to create socket on eth0 (5) for [fe80::418:2dff:fe7a:98f%2]:123 Dec 12 18:45:26.700566 ntpd[2170]: Listen normally on 3 eth0 172.31.25.153:123 Dec 12 18:45:26.700595 ntpd[2170]: Listen normally on 4 lo [::1]:123 Dec 12 18:45:26.700630 ntpd[2170]: bind(21) AF_INET6 [fe80::418:2dff:fe7a:98f%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 18:45:26.700652 ntpd[2170]: unable to create socket on eth0 (5) for [fe80::418:2dff:fe7a:98f%2]:123 Dec 12 18:45:26.710431 kernel: ntpd[2170]: segfault at 24 ip 000055f9656f5aeb sp 00007ffe0a7e0590 error 4 in ntpd[68aeb,55f965693000+80000] likely on CPU 0 (core 0, socket 0) Dec 12 18:45:26.710570 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 12 18:45:26.713053 systemd-coredump[2183]: Process 2170 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 12 18:45:26.721123 systemd[1]: Started systemd-coredump@1-2183-0.service - Process Core Dump (PID 2183/UID 0). Dec 12 18:45:26.737148 systemd-hostnamed[2032]: Hostname set to (transient) Dec 12 18:45:26.738389 systemd-resolved[1837]: System hostname changed to 'ip-172-31-25-153'. 
Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.754965165Z" level=info msg="Start subscribing containerd event" Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755032852Z" level=info msg="Start recovering state" Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755158372Z" level=info msg="Start event monitor" Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755174070Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755187154Z" level=info msg="Start streaming server" Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755197965Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755207866Z" level=info msg="runtime interface starting up..." Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755216233Z" level=info msg="starting plugins..." Dec 12 18:45:26.755453 containerd[1990]: time="2025-12-12T18:45:26.755231416Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:45:26.760776 containerd[1990]: time="2025-12-12T18:45:26.757849591Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:45:26.760776 containerd[1990]: time="2025-12-12T18:45:26.757936260Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:45:26.760776 containerd[1990]: time="2025-12-12T18:45:26.758012018Z" level=info msg="containerd successfully booted in 0.416114s" Dec 12 18:45:26.758148 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:45:26.896548 systemd-networkd[1833]: eth0: Gained IPv6LL Dec 12 18:45:26.900704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:45:26.905576 systemd[1]: Reached target network-online.target - Network is Online. 
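Note that the first two ntpd starts crashed shortly after logging bind failures for the eth0 link-local address, which only became assignable once systemd-networkd reported "Gained IPv6LL" above. Whether the bind failure is connected to the SIGSEGV cannot be determined from this log alone, but if startup ordering turns out to matter, one possible mitigation is a drop-in that delays ntpd until the network is online (illustrative sketch, not a confirmed fix):

```
# /etc/systemd/system/ntpd.service.d/10-after-network-online.conf
# Illustrative drop-in: start ntpd only once network-online.target is reached.
[Unit]
After=network-online.target
Wants=network-online.target
```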
Dec 12 18:45:26.910694 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 12 18:45:26.916828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:26.923969 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:45:27.010155 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:45:27.014726 systemd-coredump[2184]: Process 2170 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2170: #0 0x000055f9656f5aeb n/a (ntpd + 0x68aeb) #1 0x000055f96569ecdf n/a (ntpd + 0x11cdf) #2 0x000055f96569f575 n/a (ntpd + 0x12575) #3 0x000055f96569ad8a n/a (ntpd + 0xdd8a) #4 0x000055f96569c5d3 n/a (ntpd + 0xf5d3) #5 0x000055f9656a4fd1 n/a (ntpd + 0x17fd1) #6 0x000055f965695c2d n/a (ntpd + 0x8c2d) #7 0x00007f56bf19d16c n/a (libc.so.6 + 0x2716c) #8 0x00007f56bf19d229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055f965695c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 12 18:45:27.023761 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 12 18:45:27.023941 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 12 18:45:27.037983 systemd[1]: systemd-coredump@1-2183-0.service: Deactivated successfully. Dec 12 18:45:27.125372 sshd_keygen[2003]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:45:27.152504 amazon-ssm-agent[2187]: Initializing new seelog logger Dec 12 18:45:27.154225 amazon-ssm-agent[2187]: New Seelog Logger Creation Complete Dec 12 18:45:27.154472 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
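The two ntpd core dumps (PIDs 1956 and 2170) print stack frames in the form "(ntpd + 0x<offset>)". Comparing those offsets is a quick way to confirm both crashes hit the same instruction despite ASLR giving each run different absolute addresses; a throwaway parsing sketch, with the two frame-0 lines copied from the log:

```python
import re

# Frames look like: "#0 0x000055fb5cb07aeb n/a (ntpd + 0x68aeb)"
FRAME_RE = re.compile(r"\(ntpd \+ (0x[0-9a-f]+)\)")

dump1 = "#0 0x000055fb5cb07aeb n/a (ntpd + 0x68aeb)"  # first crash, PID 1956
dump2 = "#0 0x000055f9656f5aeb n/a (ntpd + 0x68aeb)"  # second crash, PID 2170

off1 = int(FRAME_RE.search(dump1).group(1), 16)
off2 = int(FRAME_RE.search(dump2).group(1), 16)

# Same text offset in both dumps -> same faulting code path.
print(hex(off1), hex(off2), off1 == off2)
```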
Dec 12 18:45:27.154472 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.154700 tar[1975]: linux-amd64/README.md Dec 12 18:45:27.155234 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 processing appconfig overrides Dec 12 18:45:27.159123 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 processing appconfig overrides Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 processing appconfig overrides Dec 12 18:45:27.161542 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.1589 INFO Proxy environment variables: Dec 12 18:45:27.166614 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.166614 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 18:45:27.166614 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 processing appconfig overrides Dec 12 18:45:27.167369 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 18:45:27.178640 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:45:27.186283 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:45:27.202394 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 12 18:45:27.209077 ntpd[2222]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 12 18:45:27.211547 ntpd[2222]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 12 18:45:27.211573 ntpd[2222]: ----------------------------------------------------
Dec 12 18:45:27.211582 ntpd[2222]: ntp-4 is maintained by Network Time Foundation,
Dec 12 18:45:27.211591 ntpd[2222]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 12 18:45:27.211600 ntpd[2222]: corporation. Support and training for ntp-4 are
Dec 12 18:45:27.211609 ntpd[2222]: available at https://www.nwtime.org/support
Dec 12 18:45:27.211618 ntpd[2222]: ----------------------------------------------------
Dec 12 18:45:27.212386 ntpd[2222]: proto: precision = 0.065 usec (-24)
Dec 12 18:45:27.214692 ntpd[2222]: basedate set to 2025-11-30
Dec 12 18:45:27.214710 ntpd[2222]: gps base set to 2025-11-30 (week 2395)
Dec 12 18:45:27.214829 ntpd[2222]: Listen and drop on 0 v6wildcard [::]:123
Dec 12 18:45:27.214860 ntpd[2222]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 12 18:45:27.215050 ntpd[2222]: Listen normally on 2 lo 127.0.0.1:123
Dec 12 18:45:27.215080 ntpd[2222]: Listen normally on 3 eth0 172.31.25.153:123
Dec 12 18:45:27.215108 ntpd[2222]: Listen normally on 4 lo [::1]:123
Dec 12 18:45:27.215134 ntpd[2222]: Listen normally on 5 eth0 [fe80::418:2dff:fe7a:98f%2]:123
Dec 12 18:45:27.215161 ntpd[2222]: Listening on routing socket on fd #22 for interface updates
Dec 12 18:45:27.219490 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 18:45:27.219836 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 18:45:27.223887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 18:45:27.228122 ntpd[2222]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 12 18:45:27.228161 ntpd[2222]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 12 18:45:27.254063 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 18:45:27.262008 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 18:45:27.266583 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 12 18:45:27.269012 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 18:45:27.278625 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.1602 INFO https_proxy:
Dec 12 18:45:27.379137 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.1602 INFO http_proxy:
Dec 12 18:45:27.479694 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.1602 INFO no_proxy:
Dec 12 18:45:27.579678 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.1604 INFO Checking if agent identity type OnPrem can be assumed
Dec 12 18:45:27.678445 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.1607 INFO Checking if agent identity type EC2 can be assumed
Dec 12 18:45:27.714584 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 12 18:45:27.715702 amazon-ssm-agent[2187]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 12 18:45:27.716073 amazon-ssm-agent[2187]: 2025/12/12 18:45:27 processing appconfig overrides Dec 12 18:45:27.741803 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2813 INFO Agent will take identity from EC2 Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2825 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2825 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2825 INFO [amazon-ssm-agent] Starting Core Agent Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2825 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2825 INFO [Registrar] Starting registrar module Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2838 INFO [EC2Identity] Checking disk for registration info Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2839 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.2839 INFO [EC2Identity] Generating registration keypair Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.6516 INFO [EC2Identity] Checking write access before registering Dec 12 18:45:27.741921 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.6520 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 12 18:45:27.742144 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7139 INFO [EC2Identity] EC2 registration was successful. Dec 12 18:45:27.742144 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7143 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Dec 12 18:45:27.742144 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7144 INFO [CredentialRefresher] credentialRefresher has started Dec 12 18:45:27.742144 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7144 INFO [CredentialRefresher] Starting credentials refresher loop Dec 12 18:45:27.742144 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7415 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 12 18:45:27.742144 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7417 INFO [CredentialRefresher] Credentials ready Dec 12 18:45:27.777805 amazon-ssm-agent[2187]: 2025-12-12 18:45:27.7422 INFO [CredentialRefresher] Next credential rotation will be in 29.9999881484 minutes Dec 12 18:45:28.193618 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:45:28.196117 systemd[1]: Started sshd@0-172.31.25.153:22-139.178.89.65:51374.service - OpenSSH per-connection server daemon (139.178.89.65:51374). Dec 12 18:45:28.416053 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 51374 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:28.421409 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:28.437471 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:45:28.440902 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:45:28.446715 systemd-logind[1966]: New session 1 of user core. Dec 12 18:45:28.466667 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:45:28.470339 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:45:28.485883 (systemd)[2243]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:45:28.488969 systemd-logind[1966]: New session c1 of user core. Dec 12 18:45:28.682588 systemd[2243]: Queued start job for default target default.target. 
Dec 12 18:45:28.692837 systemd[2243]: Created slice app.slice - User Application Slice. Dec 12 18:45:28.692885 systemd[2243]: Reached target paths.target - Paths. Dec 12 18:45:28.693037 systemd[2243]: Reached target timers.target - Timers. Dec 12 18:45:28.694873 systemd[2243]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:45:28.708613 systemd[2243]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:45:28.709043 systemd[2243]: Reached target sockets.target - Sockets. Dec 12 18:45:28.709124 systemd[2243]: Reached target basic.target - Basic System. Dec 12 18:45:28.709175 systemd[2243]: Reached target default.target - Main User Target. Dec 12 18:45:28.709221 systemd[2243]: Startup finished in 210ms. Dec 12 18:45:28.709366 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:45:28.716812 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:45:28.754368 amazon-ssm-agent[2187]: 2025-12-12 18:45:28.7540 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 12 18:45:28.857806 amazon-ssm-agent[2187]: 2025-12-12 18:45:28.7626 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started Dec 12 18:45:28.881540 systemd[1]: Started sshd@1-172.31.25.153:22-139.178.89.65:51684.service - OpenSSH per-connection server daemon (139.178.89.65:51684). Dec 12 18:45:28.957702 amazon-ssm-agent[2187]: 2025-12-12 18:45:28.7626 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 12 18:45:29.077437 sshd[2262]: Accepted publickey for core from 139.178.89.65 port 51684 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:29.078775 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:29.085440 systemd-logind[1966]: New session 2 of user core. 
Dec 12 18:45:29.088701 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:45:29.189739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:45:29.191630 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:45:29.193575 systemd[1]: Startup finished in 2.655s (kernel) + 6.658s (initrd) + 7.958s (userspace) = 17.272s. Dec 12 18:45:29.198576 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:45:29.214301 sshd[2271]: Connection closed by 139.178.89.65 port 51684 Dec 12 18:45:29.214779 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:29.218309 systemd[1]: sshd@1-172.31.25.153:22-139.178.89.65:51684.service: Deactivated successfully. Dec 12 18:45:29.220548 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:45:29.223062 systemd-logind[1966]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:45:29.224243 systemd-logind[1966]: Removed session 2. Dec 12 18:45:29.256666 systemd[1]: Started sshd@2-172.31.25.153:22-139.178.89.65:51690.service - OpenSSH per-connection server daemon (139.178.89.65:51690). Dec 12 18:45:29.442571 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 51690 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:29.444692 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:29.451243 systemd-logind[1966]: New session 3 of user core. Dec 12 18:45:29.456748 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:45:29.582301 sshd[2294]: Connection closed by 139.178.89.65 port 51690 Dec 12 18:45:29.582872 sshd-session[2287]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:29.586895 systemd[1]: sshd@2-172.31.25.153:22-139.178.89.65:51690.service: Deactivated successfully. 
Dec 12 18:45:29.591181 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:45:29.594137 systemd-logind[1966]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:45:29.595298 systemd-logind[1966]: Removed session 3. Dec 12 18:45:30.297696 kubelet[2278]: E1212 18:45:30.297614 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:45:30.300542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:45:30.300695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:45:30.301001 systemd[1]: kubelet.service: Consumed 1.103s CPU time, 269.2M memory peak. Dec 12 18:45:35.861650 systemd-resolved[1837]: Clock change detected. Flushing caches. Dec 12 18:45:41.265884 systemd[1]: Started sshd@3-172.31.25.153:22-139.178.89.65:34256.service - OpenSSH per-connection server daemon (139.178.89.65:34256). Dec 12 18:45:41.433817 sshd[2302]: Accepted publickey for core from 139.178.89.65 port 34256 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:41.435631 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:41.441760 systemd-logind[1966]: New session 4 of user core. Dec 12 18:45:41.448541 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:45:41.565835 sshd[2305]: Connection closed by 139.178.89.65 port 34256 Dec 12 18:45:41.566702 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:41.571473 systemd[1]: sshd@3-172.31.25.153:22-139.178.89.65:34256.service: Deactivated successfully. Dec 12 18:45:41.573639 systemd[1]: session-4.scope: Deactivated successfully. 
Dec 12 18:45:41.574742 systemd-logind[1966]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:45:41.576641 systemd-logind[1966]: Removed session 4. Dec 12 18:45:41.600245 systemd[1]: Started sshd@4-172.31.25.153:22-139.178.89.65:34264.service - OpenSSH per-connection server daemon (139.178.89.65:34264). Dec 12 18:45:41.774420 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 34264 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:41.775719 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:41.781350 systemd-logind[1966]: New session 5 of user core. Dec 12 18:45:41.787310 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:45:41.907175 sshd[2314]: Connection closed by 139.178.89.65 port 34264 Dec 12 18:45:41.908102 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:41.911685 systemd[1]: sshd@4-172.31.25.153:22-139.178.89.65:34264.service: Deactivated successfully. Dec 12 18:45:41.913594 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:45:41.915362 systemd-logind[1966]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:45:41.916935 systemd-logind[1966]: Removed session 5. Dec 12 18:45:41.941965 systemd[1]: Started sshd@5-172.31.25.153:22-139.178.89.65:34274.service - OpenSSH per-connection server daemon (139.178.89.65:34274). Dec 12 18:45:41.974698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:45:41.978201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:42.140027 sshd[2320]: Accepted publickey for core from 139.178.89.65 port 34274 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:42.142028 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:42.148939 systemd-logind[1966]: New session 6 of user core. 
Dec 12 18:45:42.159303 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:45:42.282896 sshd[2326]: Connection closed by 139.178.89.65 port 34274 Dec 12 18:45:42.283591 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:42.288137 systemd[1]: sshd@5-172.31.25.153:22-139.178.89.65:34274.service: Deactivated successfully. Dec 12 18:45:42.290132 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:45:42.291935 systemd-logind[1966]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:45:42.294025 systemd-logind[1966]: Removed session 6. Dec 12 18:45:42.319384 systemd[1]: Started sshd@6-172.31.25.153:22-139.178.89.65:34280.service - OpenSSH per-connection server daemon (139.178.89.65:34280). Dec 12 18:45:42.491173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:45:42.502469 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:45:42.506552 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 34280 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:42.509026 sshd-session[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:42.519411 systemd-logind[1966]: New session 7 of user core. Dec 12 18:45:42.523262 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 18:45:42.559864 kubelet[2340]: E1212 18:45:42.559813 2340 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:45:42.564245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:45:42.564505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:45:42.565243 systemd[1]: kubelet.service: Consumed 195ms CPU time, 108.7M memory peak. Dec 12 18:45:42.659231 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:45:42.659628 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:45:42.671383 sudo[2347]: pam_unix(sudo:session): session closed for user root Dec 12 18:45:42.694833 sshd[2345]: Connection closed by 139.178.89.65 port 34280 Dec 12 18:45:42.695836 sshd-session[2332]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:42.701664 systemd[1]: sshd@6-172.31.25.153:22-139.178.89.65:34280.service: Deactivated successfully. Dec 12 18:45:42.703669 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:45:42.704642 systemd-logind[1966]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:45:42.706735 systemd-logind[1966]: Removed session 7. Dec 12 18:45:42.731900 systemd[1]: Started sshd@7-172.31.25.153:22-139.178.89.65:34294.service - OpenSSH per-connection server daemon (139.178.89.65:34294). 
Dec 12 18:45:42.912399 sshd[2353]: Accepted publickey for core from 139.178.89.65 port 34294 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:42.913996 sshd-session[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:42.919972 systemd-logind[1966]: New session 8 of user core. Dec 12 18:45:42.929380 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:45:43.027525 sudo[2358]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:45:43.027888 sudo[2358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:45:43.037480 sudo[2358]: pam_unix(sudo:session): session closed for user root Dec 12 18:45:43.045346 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:45:43.045721 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:45:43.057201 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:45:43.112664 augenrules[2380]: No rules Dec 12 18:45:43.114421 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:45:43.114696 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:45:43.115790 sudo[2357]: pam_unix(sudo:session): session closed for user root Dec 12 18:45:43.139364 sshd[2356]: Connection closed by 139.178.89.65 port 34294 Dec 12 18:45:43.140182 sshd-session[2353]: pam_unix(sshd:session): session closed for user core Dec 12 18:45:43.146260 systemd[1]: sshd@7-172.31.25.153:22-139.178.89.65:34294.service: Deactivated successfully. Dec 12 18:45:43.148284 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:45:43.149687 systemd-logind[1966]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:45:43.151550 systemd-logind[1966]: Removed session 8. 
Dec 12 18:45:43.175011 systemd[1]: Started sshd@8-172.31.25.153:22-139.178.89.65:34300.service - OpenSSH per-connection server daemon (139.178.89.65:34300). Dec 12 18:45:43.381638 sshd[2389]: Accepted publickey for core from 139.178.89.65 port 34300 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:45:43.383007 sshd-session[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:45:43.388414 systemd-logind[1966]: New session 9 of user core. Dec 12 18:45:43.400300 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:45:43.499393 sudo[2393]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:45:43.499674 sudo[2393]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:45:44.202816 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:45:44.221567 (dockerd)[2411]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:45:44.659151 dockerd[2411]: time="2025-12-12T18:45:44.658789878Z" level=info msg="Starting up" Dec 12 18:45:44.664300 dockerd[2411]: time="2025-12-12T18:45:44.664249519Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:45:44.676724 dockerd[2411]: time="2025-12-12T18:45:44.676643531Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:45:44.710743 systemd[1]: var-lib-docker-metacopy\x2dcheck3410738098-merged.mount: Deactivated successfully. Dec 12 18:45:44.731305 dockerd[2411]: time="2025-12-12T18:45:44.731261047Z" level=info msg="Loading containers: start." Dec 12 18:45:44.746066 kernel: Initializing XFRM netlink socket Dec 12 18:45:45.026275 (udev-worker)[2432]: Network interface NamePolicy= disabled on kernel command line. 
Dec 12 18:45:45.134171 systemd-networkd[1833]: docker0: Link UP Dec 12 18:45:45.143208 dockerd[2411]: time="2025-12-12T18:45:45.143151234Z" level=info msg="Loading containers: done." Dec 12 18:45:45.163637 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4277021716-merged.mount: Deactivated successfully. Dec 12 18:45:45.168095 dockerd[2411]: time="2025-12-12T18:45:45.168017909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:45:45.168291 dockerd[2411]: time="2025-12-12T18:45:45.168149568Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:45:45.168291 dockerd[2411]: time="2025-12-12T18:45:45.168266131Z" level=info msg="Initializing buildkit" Dec 12 18:45:45.199971 dockerd[2411]: time="2025-12-12T18:45:45.199928042Z" level=info msg="Completed buildkit initialization" Dec 12 18:45:45.209568 dockerd[2411]: time="2025-12-12T18:45:45.209510561Z" level=info msg="Daemon has completed initialization" Dec 12 18:45:45.209780 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:45:45.210275 dockerd[2411]: time="2025-12-12T18:45:45.209745383Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:45:46.423874 containerd[1990]: time="2025-12-12T18:45:46.423831560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 18:45:47.022130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149371721.mount: Deactivated successfully. 
Dec 12 18:45:48.675961 containerd[1990]: time="2025-12-12T18:45:48.675902228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:48.677299 containerd[1990]: time="2025-12-12T18:45:48.677116403Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Dec 12 18:45:48.678367 containerd[1990]: time="2025-12-12T18:45:48.678328913Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:48.681716 containerd[1990]: time="2025-12-12T18:45:48.681662300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:48.686060 containerd[1990]: time="2025-12-12T18:45:48.685805938Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.261928987s" Dec 12 18:45:48.686060 containerd[1990]: time="2025-12-12T18:45:48.685860415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 18:45:48.690995 containerd[1990]: time="2025-12-12T18:45:48.690181864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 18:45:50.768507 containerd[1990]: time="2025-12-12T18:45:50.768448632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:50.822786 containerd[1990]: time="2025-12-12T18:45:50.822728111Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Dec 12 18:45:50.868215 containerd[1990]: time="2025-12-12T18:45:50.868153068Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:50.929958 containerd[1990]: time="2025-12-12T18:45:50.929891489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:50.930762 containerd[1990]: time="2025-12-12T18:45:50.930672128Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.240447658s" Dec 12 18:45:50.930762 containerd[1990]: time="2025-12-12T18:45:50.930706536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 18:45:50.931121 containerd[1990]: time="2025-12-12T18:45:50.931093294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 18:45:52.348852 containerd[1990]: time="2025-12-12T18:45:52.348765060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:52.350618 containerd[1990]: time="2025-12-12T18:45:52.350565827Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Dec 12 18:45:52.352352 containerd[1990]: time="2025-12-12T18:45:52.352089418Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:52.355496 containerd[1990]: time="2025-12-12T18:45:52.355452149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:52.356560 containerd[1990]: time="2025-12-12T18:45:52.356529462Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.425412001s" Dec 12 18:45:52.356742 containerd[1990]: time="2025-12-12T18:45:52.356656774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 18:45:52.357934 containerd[1990]: time="2025-12-12T18:45:52.357905891Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 18:45:52.814865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:45:52.816868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:53.689480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:45:53.709094 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:45:53.745229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451185284.mount: Deactivated successfully. Dec 12 18:45:53.772690 kubelet[2701]: E1212 18:45:53.772650 2701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:45:53.775882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:45:53.776020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:45:53.778338 systemd[1]: kubelet.service: Consumed 204ms CPU time, 110.1M memory peak. Dec 12 18:45:54.364246 containerd[1990]: time="2025-12-12T18:45:54.364161236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:54.365366 containerd[1990]: time="2025-12-12T18:45:54.365262699Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 12 18:45:54.367069 containerd[1990]: time="2025-12-12T18:45:54.366761589Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:54.369443 containerd[1990]: time="2025-12-12T18:45:54.369379480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:54.370404 containerd[1990]: time="2025-12-12T18:45:54.369995238Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.01205413s" Dec 12 18:45:54.370404 containerd[1990]: time="2025-12-12T18:45:54.370052307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:45:54.370733 containerd[1990]: time="2025-12-12T18:45:54.370697627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:45:54.913191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182136767.mount: Deactivated successfully. Dec 12 18:45:56.053293 containerd[1990]: time="2025-12-12T18:45:56.053199904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:56.054603 containerd[1990]: time="2025-12-12T18:45:56.054531048Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 12 18:45:56.057489 containerd[1990]: time="2025-12-12T18:45:56.057421454Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:56.060648 containerd[1990]: time="2025-12-12T18:45:56.060311833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:56.061404 containerd[1990]: time="2025-12-12T18:45:56.061367847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.690638022s"
Dec 12 18:45:56.061404 containerd[1990]: time="2025-12-12T18:45:56.061406067Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Dec 12 18:45:56.062354 containerd[1990]: time="2025-12-12T18:45:56.062300999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 18:45:56.547932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857166759.mount: Deactivated successfully.
Dec 12 18:45:56.555735 containerd[1990]: time="2025-12-12T18:45:56.555680878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:45:56.556568 containerd[1990]: time="2025-12-12T18:45:56.556427373Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 12 18:45:56.557436 containerd[1990]: time="2025-12-12T18:45:56.557396836Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:45:56.559949 containerd[1990]: time="2025-12-12T18:45:56.559895884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:45:56.560561 containerd[1990]: time="2025-12-12T18:45:56.560422068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 498.08957ms"
Dec 12 18:45:56.560561 containerd[1990]: time="2025-12-12T18:45:56.560451492Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 12 18:45:56.561472 containerd[1990]: time="2025-12-12T18:45:56.561299696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 12 18:45:57.137016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501315079.mount: Deactivated successfully.
Dec 12 18:45:58.412580 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 12 18:46:00.037493 containerd[1990]: time="2025-12-12T18:46:00.037402516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:00.044398 containerd[1990]: time="2025-12-12T18:46:00.044215932Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227"
Dec 12 18:46:00.050132 containerd[1990]: time="2025-12-12T18:46:00.046815610Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:00.065776 containerd[1990]: time="2025-12-12T18:46:00.065621919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:00.068573 containerd[1990]: time="2025-12-12T18:46:00.068368158Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.507035694s"
Dec 12 18:46:00.068573 containerd[1990]: time="2025-12-12T18:46:00.068574454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Dec 12 18:46:04.030231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 12 18:46:04.034284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:46:04.731284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:46:04.755958 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:46:04.927423 kubelet[2855]: E1212 18:46:04.927332 2855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:46:04.931736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:46:04.931919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:46:04.937676 systemd[1]: kubelet.service: Consumed 235ms CPU time, 109M memory peak.
Dec 12 18:46:06.935637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:46:06.935894 systemd[1]: kubelet.service: Consumed 235ms CPU time, 109M memory peak.
Dec 12 18:46:06.939200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:46:06.977423 systemd[1]: Reload requested from client PID 2869 ('systemctl') (unit session-9.scope)...
Dec 12 18:46:06.977446 systemd[1]: Reloading...
Dec 12 18:46:07.130198 zram_generator::config[2913]: No configuration found.
Dec 12 18:46:07.495875 systemd[1]: Reloading finished in 517 ms.
Dec 12 18:46:07.618728 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 18:46:07.618857 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 18:46:07.619265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:46:07.619333 systemd[1]: kubelet.service: Consumed 153ms CPU time, 98.4M memory peak.
Dec 12 18:46:07.630677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:46:08.010522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:46:08.032139 (kubelet)[2977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 18:46:08.180024 kubelet[2977]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:46:08.180024 kubelet[2977]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 18:46:08.180024 kubelet[2977]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 18:46:08.184471 kubelet[2977]: I1212 18:46:08.184355 2977 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 18:46:09.395017 kubelet[2977]: I1212 18:46:09.394965 2977 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 12 18:46:09.395017 kubelet[2977]: I1212 18:46:09.395001 2977 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 18:46:09.395533 kubelet[2977]: I1212 18:46:09.395425 2977 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 18:46:09.447053 kubelet[2977]: I1212 18:46:09.446978 2977 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 18:46:09.449218 kubelet[2977]: E1212 18:46:09.449159 2977 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 18:46:09.481203 kubelet[2977]: I1212 18:46:09.481161 2977 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 18:46:09.492778 kubelet[2977]: I1212 18:46:09.492717 2977 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 12 18:46:09.509378 kubelet[2977]: I1212 18:46:09.509304 2977 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 18:46:09.513495 kubelet[2977]: I1212 18:46:09.509372 2977 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 18:46:09.516970 kubelet[2977]: I1212 18:46:09.516697 2977 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 18:46:09.516970 kubelet[2977]: I1212 18:46:09.516847 2977 container_manager_linux.go:303] "Creating device plugin manager"
Dec 12 18:46:09.517178 kubelet[2977]: I1212 18:46:09.517084 2977 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:46:09.522799 kubelet[2977]: I1212 18:46:09.522518 2977 kubelet.go:480] "Attempting to sync node with API server"
Dec 12 18:46:09.522799 kubelet[2977]: I1212 18:46:09.522567 2977 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 18:46:09.522799 kubelet[2977]: I1212 18:46:09.522592 2977 kubelet.go:386] "Adding apiserver pod source"
Dec 12 18:46:09.522799 kubelet[2977]: I1212 18:46:09.522610 2977 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 18:46:09.529482 kubelet[2977]: E1212 18:46:09.529156 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-153&limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 18:46:09.533387 kubelet[2977]: E1212 18:46:09.533201 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 18:46:09.533776 kubelet[2977]: I1212 18:46:09.533751 2977 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 18:46:09.534540 kubelet[2977]: I1212 18:46:09.534513 2977 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 18:46:09.535607 kubelet[2977]: W1212 18:46:09.535578 2977 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 18:46:09.543361 kubelet[2977]: I1212 18:46:09.542626 2977 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 12 18:46:09.543361 kubelet[2977]: I1212 18:46:09.542721 2977 server.go:1289] "Started kubelet"
Dec 12 18:46:09.550766 kubelet[2977]: I1212 18:46:09.550505 2977 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 18:46:09.550954 kubelet[2977]: I1212 18:46:09.550909 2977 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 18:46:09.551083 kubelet[2977]: I1212 18:46:09.551065 2977 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 18:46:09.559069 kubelet[2977]: I1212 18:46:09.558110 2977 server.go:317] "Adding debug handlers to kubelet server"
Dec 12 18:46:09.559330 kubelet[2977]: E1212 18:46:09.554409 2977 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.153:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-153.18808c2ad6649f90 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-153,UID:ip-172-31-25-153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-153,},FirstTimestamp:2025-12-12 18:46:09.542668176 +0000 UTC m=+1.475693064,LastTimestamp:2025-12-12 18:46:09.542668176 +0000 UTC m=+1.475693064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-153,}"
Dec 12 18:46:09.563014 kubelet[2977]: I1212 18:46:09.562975 2977 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 18:46:09.568604 kubelet[2977]: I1212 18:46:09.568572 2977 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 18:46:09.570232 kubelet[2977]: I1212 18:46:09.570199 2977 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 12 18:46:09.571351 kubelet[2977]: E1212 18:46:09.571119 2977 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-153\" not found"
Dec 12 18:46:09.572148 kubelet[2977]: E1212 18:46:09.571975 2977 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 18:46:09.574571 kubelet[2977]: I1212 18:46:09.574498 2977 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 12 18:46:09.574745 kubelet[2977]: I1212 18:46:09.574732 2977 reconciler.go:26] "Reconciler: start to sync state"
Dec 12 18:46:09.575373 kubelet[2977]: E1212 18:46:09.575340 2977 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-153?timeout=10s\": dial tcp 172.31.25.153:6443: connect: connection refused" interval="200ms"
Dec 12 18:46:09.575888 kubelet[2977]: I1212 18:46:09.575855 2977 factory.go:223] Registration of the systemd container factory successfully
Dec 12 18:46:09.575970 kubelet[2977]: I1212 18:46:09.575942 2977 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 18:46:09.576520 kubelet[2977]: E1212 18:46:09.576454 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 18:46:09.581456 kubelet[2977]: I1212 18:46:09.581422 2977 factory.go:223] Registration of the containerd container factory successfully
Dec 12 18:46:09.626063 kubelet[2977]: I1212 18:46:09.625919 2977 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 12 18:46:09.628142 kubelet[2977]: I1212 18:46:09.628109 2977 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 12 18:46:09.628704 kubelet[2977]: I1212 18:46:09.628290 2977 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 12 18:46:09.628704 kubelet[2977]: I1212 18:46:09.628320 2977 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 18:46:09.628704 kubelet[2977]: I1212 18:46:09.628338 2977 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 12 18:46:09.628704 kubelet[2977]: E1212 18:46:09.628401 2977 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:46:09.662867 kubelet[2977]: E1212 18:46:09.662772 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 18:46:09.664865 kubelet[2977]: I1212 18:46:09.664124 2977 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:46:09.664865 kubelet[2977]: I1212 18:46:09.664143 2977 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:46:09.664865 kubelet[2977]: I1212 18:46:09.664168 2977 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:46:09.671455 kubelet[2977]: E1212 18:46:09.671400 2977 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-153\" not found"
Dec 12 18:46:09.694246 kubelet[2977]: I1212 18:46:09.694185 2977 policy_none.go:49] "None policy: Start"
Dec 12 18:46:09.694246 kubelet[2977]: I1212 18:46:09.694231 2977 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 18:46:09.694246 kubelet[2977]: I1212 18:46:09.694258 2977 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 18:46:09.707731 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 18:46:09.725429 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 18:46:09.728786 kubelet[2977]: E1212 18:46:09.728757 2977 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 12 18:46:09.729786 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 12 18:46:09.742311 kubelet[2977]: E1212 18:46:09.742223 2977 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 18:46:09.742528 kubelet[2977]: I1212 18:46:09.742460 2977 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:46:09.742528 kubelet[2977]: I1212 18:46:09.742477 2977 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:46:09.743208 kubelet[2977]: I1212 18:46:09.743147 2977 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:46:09.746385 kubelet[2977]: E1212 18:46:09.746280 2977 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 18:46:09.746546 kubelet[2977]: E1212 18:46:09.746530 2977 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-153\" not found"
Dec 12 18:46:09.776494 kubelet[2977]: E1212 18:46:09.776444 2977 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-153?timeout=10s\": dial tcp 172.31.25.153:6443: connect: connection refused" interval="400ms"
Dec 12 18:46:09.845527 kubelet[2977]: I1212 18:46:09.845486 2977 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-153"
Dec 12 18:46:09.845934 kubelet[2977]: E1212 18:46:09.845898 2977 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.153:6443/api/v1/nodes\": dial tcp 172.31.25.153:6443: connect: connection refused" node="ip-172-31-25-153"
Dec 12 18:46:09.944646 systemd[1]: Created slice kubepods-burstable-podc6252ae512e89327ce5f49db78ee7f57.slice - libcontainer container kubepods-burstable-podc6252ae512e89327ce5f49db78ee7f57.slice.
Dec 12 18:46:09.968095 kubelet[2977]: E1212 18:46:09.968060 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153"
Dec 12 18:46:09.972100 systemd[1]: Created slice kubepods-burstable-pod38dbe000c1140eea8af55fb56633ba48.slice - libcontainer container kubepods-burstable-pod38dbe000c1140eea8af55fb56633ba48.slice.
Dec 12 18:46:09.977427 kubelet[2977]: I1212 18:46:09.977335 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6252ae512e89327ce5f49db78ee7f57-ca-certs\") pod \"kube-apiserver-ip-172-31-25-153\" (UID: \"c6252ae512e89327ce5f49db78ee7f57\") " pod="kube-system/kube-apiserver-ip-172-31-25-153"
Dec 12 18:46:09.977575 kubelet[2977]: I1212 18:46:09.977471 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6252ae512e89327ce5f49db78ee7f57-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-153\" (UID: \"c6252ae512e89327ce5f49db78ee7f57\") " pod="kube-system/kube-apiserver-ip-172-31-25-153"
Dec 12 18:46:09.977575 kubelet[2977]: I1212 18:46:09.977495 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6252ae512e89327ce5f49db78ee7f57-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-153\" (UID: \"c6252ae512e89327ce5f49db78ee7f57\") " pod="kube-system/kube-apiserver-ip-172-31-25-153"
Dec 12 18:46:09.977575 kubelet[2977]: I1212 18:46:09.977522 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153"
Dec 12 18:46:09.977575 kubelet[2977]: I1212 18:46:09.977540 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153"
Dec 12 18:46:09.977575 kubelet[2977]: I1212 18:46:09.977562 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6ae19da33ef33f1dc6e624f00fbd5b3-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-153\" (UID: \"a6ae19da33ef33f1dc6e624f00fbd5b3\") " pod="kube-system/kube-scheduler-ip-172-31-25-153"
Dec 12 18:46:09.977708 kubelet[2977]: I1212 18:46:09.977579 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153"
Dec 12 18:46:09.977708 kubelet[2977]: I1212 18:46:09.977595 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153"
Dec 12 18:46:09.977708 kubelet[2977]: I1212 18:46:09.977610 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153"
Dec 12 18:46:09.980526 kubelet[2977]: E1212 18:46:09.980469 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153"
Dec 12 18:46:09.984376 systemd[1]: Created slice kubepods-burstable-poda6ae19da33ef33f1dc6e624f00fbd5b3.slice - libcontainer container kubepods-burstable-poda6ae19da33ef33f1dc6e624f00fbd5b3.slice.
Dec 12 18:46:09.986931 kubelet[2977]: E1212 18:46:09.986891 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153"
Dec 12 18:46:10.048214 kubelet[2977]: I1212 18:46:10.048175 2977 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-153"
Dec 12 18:46:10.048582 kubelet[2977]: E1212 18:46:10.048544 2977 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.153:6443/api/v1/nodes\": dial tcp 172.31.25.153:6443: connect: connection refused" node="ip-172-31-25-153"
Dec 12 18:46:10.177284 kubelet[2977]: E1212 18:46:10.177244 2977 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-153?timeout=10s\": dial tcp 172.31.25.153:6443: connect: connection refused" interval="800ms"
Dec 12 18:46:10.270523 containerd[1990]: time="2025-12-12T18:46:10.270300108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-153,Uid:c6252ae512e89327ce5f49db78ee7f57,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:10.288217 containerd[1990]: time="2025-12-12T18:46:10.287812871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-153,Uid:38dbe000c1140eea8af55fb56633ba48,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:10.289398 containerd[1990]: time="2025-12-12T18:46:10.289358145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-153,Uid:a6ae19da33ef33f1dc6e624f00fbd5b3,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:10.434080 kubelet[2977]: E1212 18:46:10.433653 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 18:46:10.443122 containerd[1990]: time="2025-12-12T18:46:10.442565068Z" level=info msg="connecting to shim 5ec62b4aa615eaf44bb62aedbc9f83341229be56f47b0ff640febc1614879951" address="unix:///run/containerd/s/c621fa2434185479eaf81e38f83761a922435dde6da29cd6650acdef7414be2a" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:10.446461 containerd[1990]: time="2025-12-12T18:46:10.446398301Z" level=info msg="connecting to shim 82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af" address="unix:///run/containerd/s/a732c1178cf2372d5dc40773c1e455fb94a10c5d7bf5fe77b64fbd0d9fe9f8ac" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:10.455723 containerd[1990]: time="2025-12-12T18:46:10.455670570Z" level=info msg="connecting to shim 4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd" address="unix:///run/containerd/s/e2e59aba8fdf976d3f10471403ae01a5f712cf2ad4fa331add1d8bd1b092d628" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:10.456793 kubelet[2977]: I1212 18:46:10.456747 2977 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-153"
Dec 12 18:46:10.457750 kubelet[2977]: E1212 18:46:10.457705 2977 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.153:6443/api/v1/nodes\": dial tcp 172.31.25.153:6443: connect: connection refused" node="ip-172-31-25-153"
Dec 12 18:46:10.524433 kubelet[2977]: E1212 18:46:10.524238 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 18:46:10.591307 systemd[1]: Started cri-containerd-5ec62b4aa615eaf44bb62aedbc9f83341229be56f47b0ff640febc1614879951.scope - libcontainer container 5ec62b4aa615eaf44bb62aedbc9f83341229be56f47b0ff640febc1614879951.
Dec 12 18:46:10.595417 systemd[1]: Started cri-containerd-82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af.scope - libcontainer container 82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af.
Dec 12 18:46:10.600737 systemd[1]: Started cri-containerd-4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd.scope - libcontainer container 4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd.
Dec 12 18:46:10.623798 kubelet[2977]: E1212 18:46:10.623745 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.153:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 18:46:10.718887 containerd[1990]: time="2025-12-12T18:46:10.718814343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-153,Uid:38dbe000c1140eea8af55fb56633ba48,Namespace:kube-system,Attempt:0,} returns sandbox id \"82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af\""
Dec 12 18:46:10.729929 containerd[1990]: time="2025-12-12T18:46:10.729818109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-153,Uid:a6ae19da33ef33f1dc6e624f00fbd5b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd\""
Dec 12 18:46:10.736182 containerd[1990]: time="2025-12-12T18:46:10.736121341Z" level=info msg="CreateContainer within sandbox \"82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 12 18:46:10.741311 containerd[1990]: time="2025-12-12T18:46:10.741193828Z" level=info msg="CreateContainer within sandbox \"4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 12 18:46:10.747510 containerd[1990]: time="2025-12-12T18:46:10.747404380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-153,Uid:c6252ae512e89327ce5f49db78ee7f57,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ec62b4aa615eaf44bb62aedbc9f83341229be56f47b0ff640febc1614879951\""
Dec 12 18:46:10.757013 containerd[1990]: time="2025-12-12T18:46:10.756818115Z" level=info msg="CreateContainer within sandbox \"5ec62b4aa615eaf44bb62aedbc9f83341229be56f47b0ff640febc1614879951\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 12 18:46:10.765457 kubelet[2977]: E1212 18:46:10.765387 2977 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-153&limit=500&resourceVersion=0\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 18:46:10.765766 containerd[1990]: time="2025-12-12T18:46:10.765737075Z" level=info msg="Container f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.767516 containerd[1990]: time="2025-12-12T18:46:10.767467221Z" level=info msg="Container 16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.784609 containerd[1990]: time="2025-12-12T18:46:10.784252616Z" level=info msg="Container 51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.804162 containerd[1990]: time="2025-12-12T18:46:10.804079370Z" level=info msg="CreateContainer within sandbox \"82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb\""
Dec 12 18:46:10.807027 containerd[1990]: time="2025-12-12T18:46:10.806986934Z" level=info msg="CreateContainer within sandbox \"4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66\""
Dec 12 18:46:10.808335 containerd[1990]: time="2025-12-12T18:46:10.808262778Z" level=info msg="StartContainer for \"16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb\""
Dec 12 18:46:10.811074 containerd[1990]: time="2025-12-12T18:46:10.810922059Z" level=info msg="connecting to shim 16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb" address="unix:///run/containerd/s/a732c1178cf2372d5dc40773c1e455fb94a10c5d7bf5fe77b64fbd0d9fe9f8ac" protocol=ttrpc version=3
Dec 12 18:46:10.812300 containerd[1990]: time="2025-12-12T18:46:10.812260688Z" level=info msg="StartContainer for \"f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66\""
Dec 12 18:46:10.816078 containerd[1990]: time="2025-12-12T18:46:10.815628064Z" level=info msg="CreateContainer within sandbox \"5ec62b4aa615eaf44bb62aedbc9f83341229be56f47b0ff640febc1614879951\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82\""
Dec 12 18:46:10.817726 containerd[1990]: time="2025-12-12T18:46:10.817683693Z" level=info msg="connecting to shim f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66" address="unix:///run/containerd/s/e2e59aba8fdf976d3f10471403ae01a5f712cf2ad4fa331add1d8bd1b092d628" protocol=ttrpc version=3
Dec 12 18:46:10.818360 containerd[1990]: time="2025-12-12T18:46:10.818191161Z" level=info msg="StartContainer for \"51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82\""
Dec 12 18:46:10.822430 containerd[1990]: time="2025-12-12T18:46:10.822309995Z" level=info msg="connecting to shim 51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82" address="unix:///run/containerd/s/c621fa2434185479eaf81e38f83761a922435dde6da29cd6650acdef7414be2a" protocol=ttrpc version=3
Dec 12 18:46:10.849262 systemd[1]: Started cri-containerd-16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb.scope - libcontainer container 16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb.
Dec 12 18:46:10.880343 systemd[1]: Started cri-containerd-51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82.scope - libcontainer container 51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82.
Dec 12 18:46:10.890872 systemd[1]: Started cri-containerd-f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66.scope - libcontainer container f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66.
Dec 12 18:46:10.978781 kubelet[2977]: E1212 18:46:10.978617 2977 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-153?timeout=10s\": dial tcp 172.31.25.153:6443: connect: connection refused" interval="1.6s" Dec 12 18:46:10.989421 containerd[1990]: time="2025-12-12T18:46:10.989327592Z" level=info msg="StartContainer for \"16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb\" returns successfully" Dec 12 18:46:11.021177 containerd[1990]: time="2025-12-12T18:46:11.021011913Z" level=info msg="StartContainer for \"51e923db4c40200c88b90b155fbbd796296ba141e08ef53557728e88ce8efa82\" returns successfully" Dec 12 18:46:11.038005 containerd[1990]: time="2025-12-12T18:46:11.037890206Z" level=info msg="StartContainer for \"f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66\" returns successfully" Dec 12 18:46:11.260352 kubelet[2977]: I1212 18:46:11.260325 2977 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-153" Dec 12 18:46:11.260952 kubelet[2977]: E1212 18:46:11.260802 2977 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.153:6443/api/v1/nodes\": dial tcp 172.31.25.153:6443: connect: connection refused" node="ip-172-31-25-153" Dec 12 18:46:11.460302 kubelet[2977]: E1212 18:46:11.460181 2977 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.153:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:46:11.685986 kubelet[2977]: E1212 18:46:11.685288 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not 
found" node="ip-172-31-25-153" Dec 12 18:46:11.694158 kubelet[2977]: E1212 18:46:11.692380 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153" Dec 12 18:46:11.697694 kubelet[2977]: E1212 18:46:11.697663 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153" Dec 12 18:46:12.573366 update_engine[1967]: I20251212 18:46:12.572654 1967 update_attempter.cc:509] Updating boot flags... Dec 12 18:46:12.708943 kubelet[2977]: E1212 18:46:12.708900 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153" Dec 12 18:46:12.716248 kubelet[2977]: E1212 18:46:12.715439 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153" Dec 12 18:46:12.869343 kubelet[2977]: I1212 18:46:12.869127 2977 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-153" Dec 12 18:46:14.749681 kubelet[2977]: E1212 18:46:14.749639 2977 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-153\" not found" node="ip-172-31-25-153" Dec 12 18:46:14.813314 kubelet[2977]: E1212 18:46:14.813277 2977 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-153\" not found" node="ip-172-31-25-153" Dec 12 18:46:14.839630 kubelet[2977]: I1212 18:46:14.839594 2977 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-153" Dec 12 18:46:14.839630 kubelet[2977]: E1212 18:46:14.839645 2977 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-25-153\": node 
\"ip-172-31-25-153\" not found" Dec 12 18:46:14.875516 kubelet[2977]: I1212 18:46:14.875425 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:14.883918 kubelet[2977]: E1212 18:46:14.883880 2977 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:14.883918 kubelet[2977]: I1212 18:46:14.883910 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-153" Dec 12 18:46:14.886080 kubelet[2977]: E1212 18:46:14.886026 2977 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-153" Dec 12 18:46:14.886080 kubelet[2977]: I1212 18:46:14.886077 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:14.888174 kubelet[2977]: E1212 18:46:14.888133 2977 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-153\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:15.537524 kubelet[2977]: I1212 18:46:15.537457 2977 apiserver.go:52] "Watching apiserver" Dec 12 18:46:15.575285 kubelet[2977]: I1212 18:46:15.575240 2977 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:46:17.088581 systemd[1]: Reload requested from client PID 3439 ('systemctl') (unit session-9.scope)... Dec 12 18:46:17.088602 systemd[1]: Reloading... Dec 12 18:46:17.235104 zram_generator::config[3483]: No configuration found. Dec 12 18:46:17.523901 systemd[1]: Reloading finished in 434 ms. 
Dec 12 18:46:17.551542 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:46:17.567781 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:46:17.568128 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:46:17.568196 systemd[1]: kubelet.service: Consumed 1.659s CPU time, 128.9M memory peak. Dec 12 18:46:17.574982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:46:18.007937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:46:18.019547 (kubelet)[3543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:46:18.096293 kubelet[3543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:46:18.096293 kubelet[3543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:46:18.096293 kubelet[3543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 18:46:18.096828 kubelet[3543]: I1212 18:46:18.096405 3543 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:46:18.111629 kubelet[3543]: I1212 18:46:18.111585 3543 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:46:18.111629 kubelet[3543]: I1212 18:46:18.111614 3543 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:46:18.111917 kubelet[3543]: I1212 18:46:18.111895 3543 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:46:18.113920 kubelet[3543]: I1212 18:46:18.113862 3543 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:46:18.118423 kubelet[3543]: I1212 18:46:18.118387 3543 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:46:18.131326 kubelet[3543]: I1212 18:46:18.131298 3543 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:46:18.133982 kubelet[3543]: I1212 18:46:18.133944 3543 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:46:18.135756 kubelet[3543]: I1212 18:46:18.135691 3543 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:46:18.135887 kubelet[3543]: I1212 18:46:18.135730 3543 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-153","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:46:18.135887 kubelet[3543]: I1212 18:46:18.135885 3543 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 
18:46:18.136003 kubelet[3543]: I1212 18:46:18.135896 3543 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:46:18.136003 kubelet[3543]: I1212 18:46:18.135945 3543 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:46:18.136122 kubelet[3543]: I1212 18:46:18.136110 3543 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:46:18.138055 kubelet[3543]: I1212 18:46:18.138016 3543 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:46:18.138233 kubelet[3543]: I1212 18:46:18.138220 3543 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:46:18.138336 kubelet[3543]: I1212 18:46:18.138327 3543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:46:18.147218 kubelet[3543]: I1212 18:46:18.147176 3543 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:46:18.147926 kubelet[3543]: I1212 18:46:18.147874 3543 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:46:18.152550 kubelet[3543]: I1212 18:46:18.152522 3543 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:46:18.152654 kubelet[3543]: I1212 18:46:18.152587 3543 server.go:1289] "Started kubelet" Dec 12 18:46:18.161873 kubelet[3543]: I1212 18:46:18.161796 3543 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:46:18.163177 kubelet[3543]: I1212 18:46:18.163160 3543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:46:18.163710 kubelet[3543]: I1212 18:46:18.163648 3543 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:46:18.174701 kubelet[3543]: I1212 18:46:18.174582 3543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:46:18.174825 kubelet[3543]: I1212 18:46:18.174773 3543 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:46:18.180299 kubelet[3543]: I1212 18:46:18.180241 3543 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:46:18.184077 kubelet[3543]: I1212 18:46:18.184048 3543 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:46:18.184578 kubelet[3543]: E1212 18:46:18.184544 3543 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-153\" not found" Dec 12 18:46:18.197468 kubelet[3543]: I1212 18:46:18.197383 3543 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:46:18.197614 kubelet[3543]: I1212 18:46:18.197552 3543 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:46:18.201136 kubelet[3543]: I1212 18:46:18.201093 3543 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:46:18.206359 kubelet[3543]: I1212 18:46:18.206303 3543 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:46:18.207106 kubelet[3543]: I1212 18:46:18.206519 3543 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:46:18.207106 kubelet[3543]: I1212 18:46:18.206551 3543 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:46:18.207106 kubelet[3543]: I1212 18:46:18.206560 3543 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:46:18.207106 kubelet[3543]: E1212 18:46:18.206612 3543 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:46:18.208704 kubelet[3543]: I1212 18:46:18.208283 3543 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:46:18.214773 kubelet[3543]: I1212 18:46:18.214740 3543 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:46:18.214773 kubelet[3543]: I1212 18:46:18.214763 3543 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.283749 3543 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.283770 3543 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.283793 3543 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.283978 3543 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.283991 3543 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.284015 3543 policy_none.go:49] "None policy: Start" Dec 12 18:46:18.284073 kubelet[3543]: I1212 18:46:18.284029 3543 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:46:18.284416 kubelet[3543]: I1212 18:46:18.284085 3543 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:46:18.284416 kubelet[3543]: I1212 18:46:18.284211 3543 state_mem.go:75] "Updated machine memory state" Dec 12 18:46:18.291133 kubelet[3543]: 
E1212 18:46:18.290772 3543 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:46:18.293064 kubelet[3543]: I1212 18:46:18.292667 3543 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:46:18.293064 kubelet[3543]: I1212 18:46:18.292687 3543 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:46:18.298191 kubelet[3543]: I1212 18:46:18.298107 3543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:46:18.302018 kubelet[3543]: E1212 18:46:18.301987 3543 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:46:18.308531 kubelet[3543]: I1212 18:46:18.308178 3543 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-153" Dec 12 18:46:18.310256 kubelet[3543]: I1212 18:46:18.310225 3543 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:18.310440 kubelet[3543]: I1212 18:46:18.310301 3543 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:18.405048 kubelet[3543]: I1212 18:46:18.404820 3543 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-153" Dec 12 18:46:18.415021 kubelet[3543]: I1212 18:46:18.414964 3543 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-25-153" Dec 12 18:46:18.416143 kubelet[3543]: I1212 18:46:18.415219 3543 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-153" Dec 12 18:46:18.501688 kubelet[3543]: I1212 18:46:18.501581 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:18.501908 kubelet[3543]: I1212 18:46:18.501881 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6ae19da33ef33f1dc6e624f00fbd5b3-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-153\" (UID: \"a6ae19da33ef33f1dc6e624f00fbd5b3\") " pod="kube-system/kube-scheduler-ip-172-31-25-153" Dec 12 18:46:18.501981 kubelet[3543]: I1212 18:46:18.501946 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:18.502547 kubelet[3543]: I1212 18:46:18.502011 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6252ae512e89327ce5f49db78ee7f57-ca-certs\") pod \"kube-apiserver-ip-172-31-25-153\" (UID: \"c6252ae512e89327ce5f49db78ee7f57\") " pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:18.502547 kubelet[3543]: I1212 18:46:18.502159 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6252ae512e89327ce5f49db78ee7f57-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-153\" (UID: \"c6252ae512e89327ce5f49db78ee7f57\") " pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:18.502547 kubelet[3543]: I1212 18:46:18.502220 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6252ae512e89327ce5f49db78ee7f57-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-153\" (UID: \"c6252ae512e89327ce5f49db78ee7f57\") " pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:18.502547 kubelet[3543]: I1212 18:46:18.502281 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:18.502547 kubelet[3543]: I1212 18:46:18.502304 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:18.502726 kubelet[3543]: I1212 18:46:18.502355 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38dbe000c1140eea8af55fb56633ba48-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-153\" (UID: \"38dbe000c1140eea8af55fb56633ba48\") " pod="kube-system/kube-controller-manager-ip-172-31-25-153" Dec 12 18:46:19.141024 kubelet[3543]: I1212 18:46:19.140956 3543 apiserver.go:52] "Watching apiserver" Dec 12 18:46:19.197878 kubelet[3543]: I1212 18:46:19.197836 3543 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:46:19.253202 kubelet[3543]: I1212 18:46:19.252447 3543 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:19.261071 kubelet[3543]: E1212 
18:46:19.260587 3543 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-153\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-153" Dec 12 18:46:19.289831 kubelet[3543]: I1212 18:46:19.289757 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-153" podStartSLOduration=1.289737507 podStartE2EDuration="1.289737507s" podCreationTimestamp="2025-12-12 18:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:19.289455255 +0000 UTC m=+1.261164531" watchObservedRunningTime="2025-12-12 18:46:19.289737507 +0000 UTC m=+1.261446780" Dec 12 18:46:19.290015 kubelet[3543]: I1212 18:46:19.289888 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-153" podStartSLOduration=1.2898807460000001 podStartE2EDuration="1.289880746s" podCreationTimestamp="2025-12-12 18:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:19.280310893 +0000 UTC m=+1.252020162" watchObservedRunningTime="2025-12-12 18:46:19.289880746 +0000 UTC m=+1.261590020" Dec 12 18:46:19.348006 kubelet[3543]: I1212 18:46:19.347939 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-153" podStartSLOduration=1.3479131739999999 podStartE2EDuration="1.347913174s" podCreationTimestamp="2025-12-12 18:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:19.317719645 +0000 UTC m=+1.289428919" watchObservedRunningTime="2025-12-12 18:46:19.347913174 +0000 UTC m=+1.319622448" Dec 12 18:46:22.283517 kubelet[3543]: I1212 18:46:22.283175 3543 kuberuntime_manager.go:1746] 
"Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:46:22.289563 containerd[1990]: time="2025-12-12T18:46:22.289518973Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:46:22.297020 kubelet[3543]: I1212 18:46:22.294442 3543 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:46:23.309640 systemd[1]: Created slice kubepods-besteffort-pod297bfcd7_5052_4ab6_ab7f_c3dc52012fb8.slice - libcontainer container kubepods-besteffort-pod297bfcd7_5052_4ab6_ab7f_c3dc52012fb8.slice. Dec 12 18:46:23.331406 kubelet[3543]: I1212 18:46:23.331365 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/297bfcd7-5052-4ab6-ab7f-c3dc52012fb8-kube-proxy\") pod \"kube-proxy-xrdvj\" (UID: \"297bfcd7-5052-4ab6-ab7f-c3dc52012fb8\") " pod="kube-system/kube-proxy-xrdvj" Dec 12 18:46:23.331406 kubelet[3543]: I1212 18:46:23.331411 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/297bfcd7-5052-4ab6-ab7f-c3dc52012fb8-xtables-lock\") pod \"kube-proxy-xrdvj\" (UID: \"297bfcd7-5052-4ab6-ab7f-c3dc52012fb8\") " pod="kube-system/kube-proxy-xrdvj" Dec 12 18:46:23.331406 kubelet[3543]: I1212 18:46:23.331440 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/297bfcd7-5052-4ab6-ab7f-c3dc52012fb8-lib-modules\") pod \"kube-proxy-xrdvj\" (UID: \"297bfcd7-5052-4ab6-ab7f-c3dc52012fb8\") " pod="kube-system/kube-proxy-xrdvj" Dec 12 18:46:23.332015 kubelet[3543]: I1212 18:46:23.331462 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsrgn\" (UniqueName: 
\"kubernetes.io/projected/297bfcd7-5052-4ab6-ab7f-c3dc52012fb8-kube-api-access-nsrgn\") pod \"kube-proxy-xrdvj\" (UID: \"297bfcd7-5052-4ab6-ab7f-c3dc52012fb8\") " pod="kube-system/kube-proxy-xrdvj"
Dec 12 18:46:23.560161 systemd[1]: Created slice kubepods-besteffort-podc914c620_78a5_4f4f_8ece_7e22d006e732.slice - libcontainer container kubepods-besteffort-podc914c620_78a5_4f4f_8ece_7e22d006e732.slice.
Dec 12 18:46:23.624564 containerd[1990]: time="2025-12-12T18:46:23.624447058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xrdvj,Uid:297bfcd7-5052-4ab6-ab7f-c3dc52012fb8,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:23.634602 kubelet[3543]: I1212 18:46:23.634487 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c914c620-78a5-4f4f-8ece-7e22d006e732-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zp2b6\" (UID: \"c914c620-78a5-4f4f-8ece-7e22d006e732\") " pod="tigera-operator/tigera-operator-7dcd859c48-zp2b6"
Dec 12 18:46:23.634869 kubelet[3543]: I1212 18:46:23.634833 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzlz\" (UniqueName: \"kubernetes.io/projected/c914c620-78a5-4f4f-8ece-7e22d006e732-kube-api-access-lmzlz\") pod \"tigera-operator-7dcd859c48-zp2b6\" (UID: \"c914c620-78a5-4f4f-8ece-7e22d006e732\") " pod="tigera-operator/tigera-operator-7dcd859c48-zp2b6"
Dec 12 18:46:23.665646 containerd[1990]: time="2025-12-12T18:46:23.665587151Z" level=info msg="connecting to shim 313defebfdcdc1973937b791007449e7b069de0de1bdf7de8e6e0b2c407689d9" address="unix:///run/containerd/s/c2c9977dbebdc911b7be37865b690370805798746e185fecf42933c8e670aaa1" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:23.699368 systemd[1]: Started cri-containerd-313defebfdcdc1973937b791007449e7b069de0de1bdf7de8e6e0b2c407689d9.scope - libcontainer container 313defebfdcdc1973937b791007449e7b069de0de1bdf7de8e6e0b2c407689d9.
Dec 12 18:46:23.738174 containerd[1990]: time="2025-12-12T18:46:23.738027062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xrdvj,Uid:297bfcd7-5052-4ab6-ab7f-c3dc52012fb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"313defebfdcdc1973937b791007449e7b069de0de1bdf7de8e6e0b2c407689d9\""
Dec 12 18:46:23.745700 containerd[1990]: time="2025-12-12T18:46:23.745452442Z" level=info msg="CreateContainer within sandbox \"313defebfdcdc1973937b791007449e7b069de0de1bdf7de8e6e0b2c407689d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 18:46:23.771094 containerd[1990]: time="2025-12-12T18:46:23.770172723Z" level=info msg="Container c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:23.778975 containerd[1990]: time="2025-12-12T18:46:23.778856691Z" level=info msg="CreateContainer within sandbox \"313defebfdcdc1973937b791007449e7b069de0de1bdf7de8e6e0b2c407689d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c\""
Dec 12 18:46:23.781620 containerd[1990]: time="2025-12-12T18:46:23.781583988Z" level=info msg="StartContainer for \"c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c\""
Dec 12 18:46:23.783693 containerd[1990]: time="2025-12-12T18:46:23.783626885Z" level=info msg="connecting to shim c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c" address="unix:///run/containerd/s/c2c9977dbebdc911b7be37865b690370805798746e185fecf42933c8e670aaa1" protocol=ttrpc version=3
Dec 12 18:46:23.823285 systemd[1]: Started cri-containerd-c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c.scope - libcontainer container c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c.
Dec 12 18:46:23.867374 containerd[1990]: time="2025-12-12T18:46:23.867332569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zp2b6,Uid:c914c620-78a5-4f4f-8ece-7e22d006e732,Namespace:tigera-operator,Attempt:0,}"
Dec 12 18:46:23.894527 containerd[1990]: time="2025-12-12T18:46:23.894382269Z" level=info msg="connecting to shim cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab" address="unix:///run/containerd/s/729651bdf164fb477c10c20dcb4bc6db11a1cf9aae6bc7c117f4d584b077c84d" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:23.934530 systemd[1]: Started cri-containerd-cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab.scope - libcontainer container cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab.
Dec 12 18:46:23.935821 containerd[1990]: time="2025-12-12T18:46:23.935116028Z" level=info msg="StartContainer for \"c25b7cf6f6d8e0a96ff850356290eb514ad27d5c00a3d7d873a1bbcf0614a99c\" returns successfully"
Dec 12 18:46:23.997884 containerd[1990]: time="2025-12-12T18:46:23.997809035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zp2b6,Uid:c914c620-78a5-4f4f-8ece-7e22d006e732,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab\""
Dec 12 18:46:24.000426 containerd[1990]: time="2025-12-12T18:46:24.000384788Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 12 18:46:24.296579 kubelet[3543]: I1212 18:46:24.296024 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xrdvj" podStartSLOduration=1.296007628 podStartE2EDuration="1.296007628s" podCreationTimestamp="2025-12-12 18:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:24.285228517 +0000 UTC m=+6.256937793" watchObservedRunningTime="2025-12-12 18:46:24.296007628 +0000 UTC m=+6.267716901"
Dec 12 18:46:24.451373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036024117.mount: Deactivated successfully.
Dec 12 18:46:26.151454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278486465.mount: Deactivated successfully.
Dec 12 18:46:26.863849 containerd[1990]: time="2025-12-12T18:46:26.863731070Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:26.875388 containerd[1990]: time="2025-12-12T18:46:26.875341978Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Dec 12 18:46:26.876995 containerd[1990]: time="2025-12-12T18:46:26.876277462Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:26.879662 containerd[1990]: time="2025-12-12T18:46:26.879597859Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:26.880142 containerd[1990]: time="2025-12-12T18:46:26.880052653Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.879607182s"
Dec 12 18:46:26.880142 containerd[1990]: time="2025-12-12T18:46:26.880083298Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 12 18:46:26.885511 containerd[1990]: time="2025-12-12T18:46:26.885474962Z" level=info msg="CreateContainer within sandbox \"cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 12 18:46:26.899061 containerd[1990]: time="2025-12-12T18:46:26.897648117Z" level=info msg="Container ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:26.902883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170045136.mount: Deactivated successfully.
Dec 12 18:46:26.907863 containerd[1990]: time="2025-12-12T18:46:26.907812382Z" level=info msg="CreateContainer within sandbox \"cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\""
Dec 12 18:46:26.908700 containerd[1990]: time="2025-12-12T18:46:26.908650057Z" level=info msg="StartContainer for \"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\""
Dec 12 18:46:26.910572 containerd[1990]: time="2025-12-12T18:46:26.910528248Z" level=info msg="connecting to shim ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4" address="unix:///run/containerd/s/729651bdf164fb477c10c20dcb4bc6db11a1cf9aae6bc7c117f4d584b077c84d" protocol=ttrpc version=3
Dec 12 18:46:26.934288 systemd[1]: Started cri-containerd-ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4.scope - libcontainer container ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4.
Dec 12 18:46:26.999484 containerd[1990]: time="2025-12-12T18:46:26.999282039Z" level=info msg="StartContainer for \"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\" returns successfully"
Dec 12 18:46:33.918467 sudo[2393]: pam_unix(sudo:session): session closed for user root
Dec 12 18:46:33.942135 sshd[2392]: Connection closed by 139.178.89.65 port 34300
Dec 12 18:46:33.943977 sshd-session[2389]: pam_unix(sshd:session): session closed for user core
Dec 12 18:46:33.955637 systemd-logind[1966]: Session 9 logged out. Waiting for processes to exit.
Dec 12 18:46:33.956599 systemd[1]: sshd@8-172.31.25.153:22-139.178.89.65:34300.service: Deactivated successfully.
Dec 12 18:46:33.963644 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 18:46:33.965023 systemd[1]: session-9.scope: Consumed 5.734s CPU time, 155.6M memory peak.
Dec 12 18:46:33.973352 systemd-logind[1966]: Removed session 9.
Dec 12 18:46:40.178989 kubelet[3543]: I1212 18:46:40.178901 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zp2b6" podStartSLOduration=14.297452716 podStartE2EDuration="17.178878504s" podCreationTimestamp="2025-12-12 18:46:23 +0000 UTC" firstStartedPulling="2025-12-12 18:46:23.99976821 +0000 UTC m=+5.971477464" lastFinishedPulling="2025-12-12 18:46:26.881193998 +0000 UTC m=+8.852903252" observedRunningTime="2025-12-12 18:46:27.306758286 +0000 UTC m=+9.278467561" watchObservedRunningTime="2025-12-12 18:46:40.178878504 +0000 UTC m=+22.150587791"
Dec 12 18:46:40.200333 systemd[1]: Created slice kubepods-besteffort-poddd0c3372_14d3_4df3_a888_c1710c8bd6e5.slice - libcontainer container kubepods-besteffort-poddd0c3372_14d3_4df3_a888_c1710c8bd6e5.slice.
Dec 12 18:46:40.342749 kubelet[3543]: I1212 18:46:40.342695 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd0c3372-14d3-4df3-a888-c1710c8bd6e5-tigera-ca-bundle\") pod \"calico-typha-5fbc8664c6-fqptg\" (UID: \"dd0c3372-14d3-4df3-a888-c1710c8bd6e5\") " pod="calico-system/calico-typha-5fbc8664c6-fqptg"
Dec 12 18:46:40.342749 kubelet[3543]: I1212 18:46:40.342754 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd0c3372-14d3-4df3-a888-c1710c8bd6e5-typha-certs\") pod \"calico-typha-5fbc8664c6-fqptg\" (UID: \"dd0c3372-14d3-4df3-a888-c1710c8bd6e5\") " pod="calico-system/calico-typha-5fbc8664c6-fqptg"
Dec 12 18:46:40.342749 kubelet[3543]: I1212 18:46:40.342785 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7xtk\" (UniqueName: \"kubernetes.io/projected/dd0c3372-14d3-4df3-a888-c1710c8bd6e5-kube-api-access-d7xtk\") pod \"calico-typha-5fbc8664c6-fqptg\" (UID: \"dd0c3372-14d3-4df3-a888-c1710c8bd6e5\") " pod="calico-system/calico-typha-5fbc8664c6-fqptg"
Dec 12 18:46:40.452979 systemd[1]: Created slice kubepods-besteffort-podb4b2fa03_56f3_4d0f_8ff2_0590f589f8b2.slice - libcontainer container kubepods-besteffort-podb4b2fa03_56f3_4d0f_8ff2_0590f589f8b2.slice.
Dec 12 18:46:40.506857 containerd[1990]: time="2025-12-12T18:46:40.506392939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fbc8664c6-fqptg,Uid:dd0c3372-14d3-4df3-a888-c1710c8bd6e5,Namespace:calico-system,Attempt:0,}"
Dec 12 18:46:40.538797 containerd[1990]: time="2025-12-12T18:46:40.538255555Z" level=info msg="connecting to shim 4aae574f170beebc45c271c4103fd914ea973944c7bccad63cc8981ed3db092d" address="unix:///run/containerd/s/35eb5f66ca32f65b999ed94085f423ae8f594a5b0a5f7e62bc4ac471fb79ba11" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:40.544226 kubelet[3543]: I1212 18:46:40.544152 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-flexvol-driver-host\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.544431 kubelet[3543]: I1212 18:46:40.544411 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-var-lib-calico\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.546207 kubelet[3543]: I1212 18:46:40.546166 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-var-run-calico\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.546465 kubelet[3543]: I1212 18:46:40.546401 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-xtables-lock\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.547247 kubelet[3543]: I1212 18:46:40.547220 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-cni-net-dir\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.547444 kubelet[3543]: I1212 18:46:40.547397 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-lib-modules\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.549209 kubelet[3543]: I1212 18:46:40.547431 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-node-certs\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.549362 kubelet[3543]: I1212 18:46:40.549323 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-tigera-ca-bundle\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.549490 kubelet[3543]: I1212 18:46:40.549476 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-cni-bin-dir\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.549639 kubelet[3543]: I1212 18:46:40.549627 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-cni-log-dir\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.549769 kubelet[3543]: I1212 18:46:40.549754 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-policysync\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.549910 kubelet[3543]: I1212 18:46:40.549896 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9lv9\" (UniqueName: \"kubernetes.io/projected/b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2-kube-api-access-c9lv9\") pod \"calico-node-9fzlv\" (UID: \"b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2\") " pod="calico-system/calico-node-9fzlv"
Dec 12 18:46:40.581391 systemd[1]: Started cri-containerd-4aae574f170beebc45c271c4103fd914ea973944c7bccad63cc8981ed3db092d.scope - libcontainer container 4aae574f170beebc45c271c4103fd914ea973944c7bccad63cc8981ed3db092d.
Dec 12 18:46:40.675199 kubelet[3543]: E1212 18:46:40.673431 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d"
Dec 12 18:46:40.697268 kubelet[3543]: E1212 18:46:40.697224 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.697690 kubelet[3543]: W1212 18:46:40.697666 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.697839 kubelet[3543]: E1212 18:46:40.697824 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.719195 containerd[1990]: time="2025-12-12T18:46:40.718436982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fbc8664c6-fqptg,Uid:dd0c3372-14d3-4df3-a888-c1710c8bd6e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4aae574f170beebc45c271c4103fd914ea973944c7bccad63cc8981ed3db092d\""
Dec 12 18:46:40.725632 containerd[1990]: time="2025-12-12T18:46:40.725583793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 18:46:40.767312 kubelet[3543]: E1212 18:46:40.767242 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.767312 kubelet[3543]: W1212 18:46:40.767271 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.767312 kubelet[3543]: E1212 18:46:40.767297 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.767875 kubelet[3543]: E1212 18:46:40.767584 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.767875 kubelet[3543]: W1212 18:46:40.767595 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.767875 kubelet[3543]: E1212 18:46:40.767611 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.767875 kubelet[3543]: E1212 18:46:40.767821 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.767875 kubelet[3543]: W1212 18:46:40.767831 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.767875 kubelet[3543]: E1212 18:46:40.767844 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.768762 containerd[1990]: time="2025-12-12T18:46:40.768627861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fzlv,Uid:b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2,Namespace:calico-system,Attempt:0,}"
Dec 12 18:46:40.768928 kubelet[3543]: E1212 18:46:40.768908 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.768928 kubelet[3543]: W1212 18:46:40.768924 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.769162 kubelet[3543]: E1212 18:46:40.768940 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.769557 kubelet[3543]: E1212 18:46:40.769538 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.769557 kubelet[3543]: W1212 18:46:40.769556 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.769697 kubelet[3543]: E1212 18:46:40.769571 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.770595 kubelet[3543]: E1212 18:46:40.770152 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.770595 kubelet[3543]: W1212 18:46:40.770168 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.770595 kubelet[3543]: E1212 18:46:40.770182 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.771084 kubelet[3543]: E1212 18:46:40.770894 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.771084 kubelet[3543]: W1212 18:46:40.770934 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.771084 kubelet[3543]: E1212 18:46:40.770949 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.771812 kubelet[3543]: E1212 18:46:40.771395 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.771812 kubelet[3543]: W1212 18:46:40.771410 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.771812 kubelet[3543]: E1212 18:46:40.771423 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.771812 kubelet[3543]: E1212 18:46:40.771632 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.771812 kubelet[3543]: W1212 18:46:40.771660 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.771812 kubelet[3543]: E1212 18:46:40.771672 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.772487 kubelet[3543]: E1212 18:46:40.772406 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.772487 kubelet[3543]: W1212 18:46:40.772421 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.772487 kubelet[3543]: E1212 18:46:40.772434 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.773159 kubelet[3543]: E1212 18:46:40.773079 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.773159 kubelet[3543]: W1212 18:46:40.773094 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.773159 kubelet[3543]: E1212 18:46:40.773107 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.775737 kubelet[3543]: E1212 18:46:40.775212 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.775737 kubelet[3543]: W1212 18:46:40.775228 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.775737 kubelet[3543]: E1212 18:46:40.775250 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.775737 kubelet[3543]: E1212 18:46:40.775490 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.775737 kubelet[3543]: W1212 18:46:40.775500 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.775737 kubelet[3543]: E1212 18:46:40.775512 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.776692 kubelet[3543]: E1212 18:46:40.776673 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.776795 kubelet[3543]: W1212 18:46:40.776782 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.776874 kubelet[3543]: E1212 18:46:40.776863 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.777401 kubelet[3543]: E1212 18:46:40.777387 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.777663 kubelet[3543]: W1212 18:46:40.777638 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.778640 kubelet[3543]: E1212 18:46:40.778619 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.781029 kubelet[3543]: E1212 18:46:40.780855 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.781029 kubelet[3543]: W1212 18:46:40.780877 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.781029 kubelet[3543]: E1212 18:46:40.780899 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.781416 kubelet[3543]: E1212 18:46:40.781285 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.781416 kubelet[3543]: W1212 18:46:40.781299 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.781416 kubelet[3543]: E1212 18:46:40.781316 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.781799 kubelet[3543]: E1212 18:46:40.781608 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.781799 kubelet[3543]: W1212 18:46:40.781618 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.781799 kubelet[3543]: E1212 18:46:40.781627 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.782053 kubelet[3543]: E1212 18:46:40.781964 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.782053 kubelet[3543]: W1212 18:46:40.781978 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.782053 kubelet[3543]: E1212 18:46:40.781990 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.782573 kubelet[3543]: E1212 18:46:40.782558 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.782662 kubelet[3543]: W1212 18:46:40.782650 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.782790 kubelet[3543]: E1212 18:46:40.782770 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.846104 containerd[1990]: time="2025-12-12T18:46:40.845709356Z" level=info msg="connecting to shim f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225" address="unix:///run/containerd/s/2772d8c203de0dc6ba196dcea44dfdceecbca15135e5859f5d7213c6cf517995" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:40.852631 kubelet[3543]: E1212 18:46:40.852390 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.852631 kubelet[3543]: W1212 18:46:40.852412 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.852631 kubelet[3543]: E1212 18:46:40.852432 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.852631 kubelet[3543]: I1212 18:46:40.852478 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c534bc62-f909-4723-a1ce-dd8a325ef04d-registration-dir\") pod \"csi-node-driver-8rtgc\" (UID: \"c534bc62-f909-4723-a1ce-dd8a325ef04d\") " pod="calico-system/csi-node-driver-8rtgc"
Dec 12 18:46:40.854096 kubelet[3543]: E1212 18:46:40.853025 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.854096 kubelet[3543]: W1212 18:46:40.853099 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.854096 kubelet[3543]: E1212 18:46:40.853119 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.854096 kubelet[3543]: I1212 18:46:40.853361 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sckx2\" (UniqueName: \"kubernetes.io/projected/c534bc62-f909-4723-a1ce-dd8a325ef04d-kube-api-access-sckx2\") pod \"csi-node-driver-8rtgc\" (UID: \"c534bc62-f909-4723-a1ce-dd8a325ef04d\") " pod="calico-system/csi-node-driver-8rtgc"
Dec 12 18:46:40.854096 kubelet[3543]: E1212 18:46:40.853482 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.854096 kubelet[3543]: W1212 18:46:40.853509 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.854096 kubelet[3543]: E1212 18:46:40.853526 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.854096 kubelet[3543]: E1212 18:46:40.853797 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.854096 kubelet[3543]: W1212 18:46:40.853809 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.855382 kubelet[3543]: E1212 18:46:40.853838 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.855382 kubelet[3543]: E1212 18:46:40.855116 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.855382 kubelet[3543]: W1212 18:46:40.855135 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.855382 kubelet[3543]: E1212 18:46:40.855152 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.855714 kubelet[3543]: I1212 18:46:40.855687 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c534bc62-f909-4723-a1ce-dd8a325ef04d-kubelet-dir\") pod \"csi-node-driver-8rtgc\" (UID: \"c534bc62-f909-4723-a1ce-dd8a325ef04d\") " pod="calico-system/csi-node-driver-8rtgc"
Dec 12 18:46:40.856014 kubelet[3543]: E1212 18:46:40.855949 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.856014 kubelet[3543]: W1212 18:46:40.855985 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.856264 kubelet[3543]: E1212 18:46:40.856157 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.856535 kubelet[3543]: E1212 18:46:40.856522 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.856629 kubelet[3543]: W1212 18:46:40.856617 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.856731 kubelet[3543]: E1212 18:46:40.856718 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.857670 kubelet[3543]: E1212 18:46:40.857496 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.857670 kubelet[3543]: W1212 18:46:40.857532 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.857670 kubelet[3543]: E1212 18:46:40.857547 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 12 18:46:40.858214 kubelet[3543]: E1212 18:46:40.858025 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.858214 kubelet[3543]: W1212 18:46:40.858181 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.858214 kubelet[3543]: E1212 18:46:40.858199 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:40.859299 kubelet[3543]: E1212 18:46:40.859180 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.859299 kubelet[3543]: W1212 18:46:40.859197 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.859299 kubelet[3543]: E1212 18:46:40.859222 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:40.859299 kubelet[3543]: I1212 18:46:40.859269 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c534bc62-f909-4723-a1ce-dd8a325ef04d-varrun\") pod \"csi-node-driver-8rtgc\" (UID: \"c534bc62-f909-4723-a1ce-dd8a325ef04d\") " pod="calico-system/csi-node-driver-8rtgc" Dec 12 18:46:40.860281 kubelet[3543]: E1212 18:46:40.860175 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.860281 kubelet[3543]: W1212 18:46:40.860245 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.860281 kubelet[3543]: E1212 18:46:40.860264 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:40.861086 kubelet[3543]: I1212 18:46:40.860577 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c534bc62-f909-4723-a1ce-dd8a325ef04d-socket-dir\") pod \"csi-node-driver-8rtgc\" (UID: \"c534bc62-f909-4723-a1ce-dd8a325ef04d\") " pod="calico-system/csi-node-driver-8rtgc" Dec 12 18:46:40.861389 kubelet[3543]: E1212 18:46:40.861334 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.861652 kubelet[3543]: W1212 18:46:40.861555 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.861652 kubelet[3543]: E1212 18:46:40.861631 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:40.863626 kubelet[3543]: E1212 18:46:40.863084 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.863626 kubelet[3543]: W1212 18:46:40.863102 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.863626 kubelet[3543]: E1212 18:46:40.863119 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:40.864108 kubelet[3543]: E1212 18:46:40.863885 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.864108 kubelet[3543]: W1212 18:46:40.863909 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.864108 kubelet[3543]: E1212 18:46:40.863943 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:40.864925 kubelet[3543]: E1212 18:46:40.864840 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:40.864925 kubelet[3543]: W1212 18:46:40.864870 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:40.864925 kubelet[3543]: E1212 18:46:40.864885 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:40.892611 systemd[1]: Started cri-containerd-f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225.scope - libcontainer container f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225. 
Dec 12 18:46:40.936621 containerd[1990]: time="2025-12-12T18:46:40.936430582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fzlv,Uid:b4b2fa03-56f3-4d0f-8ff2-0590f589f8b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\""
Dec 12 18:46:40.962434 kubelet[3543]: E1212 18:46:40.962399 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.962434 kubelet[3543]: W1212 18:46:40.962422 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.962434 kubelet[3543]: E1212 18:46:40.962446 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.962802 kubelet[3543]: E1212 18:46:40.962779 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.962802 kubelet[3543]: W1212 18:46:40.962797 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.962931 kubelet[3543]: E1212 18:46:40.962813 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.963491 kubelet[3543]: E1212 18:46:40.963462 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.963491 kubelet[3543]: W1212 18:46:40.963482 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.963491 kubelet[3543]: E1212 18:46:40.963500 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.965153 kubelet[3543]: E1212 18:46:40.965100 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.965153 kubelet[3543]: W1212 18:46:40.965118 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.965153 kubelet[3543]: E1212 18:46:40.965134 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.965538 kubelet[3543]: E1212 18:46:40.965389 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.965538 kubelet[3543]: W1212 18:46:40.965403 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.965538 kubelet[3543]: E1212 18:46:40.965425 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.968311 kubelet[3543]: E1212 18:46:40.967824 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.968311 kubelet[3543]: W1212 18:46:40.967845 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.968311 kubelet[3543]: E1212 18:46:40.967965 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.968941 kubelet[3543]: E1212 18:46:40.968658 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.968941 kubelet[3543]: W1212 18:46:40.968671 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.968941 kubelet[3543]: E1212 18:46:40.968791 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.971778 kubelet[3543]: E1212 18:46:40.969541 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.971778 kubelet[3543]: W1212 18:46:40.969556 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.971778 kubelet[3543]: E1212 18:46:40.969579 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.971778 kubelet[3543]: E1212 18:46:40.970091 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.971778 kubelet[3543]: W1212 18:46:40.970104 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.971778 kubelet[3543]: E1212 18:46:40.970212 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.973325 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.976526 kubelet[3543]: W1212 18:46:40.973343 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.973363 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.973580 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.976526 kubelet[3543]: W1212 18:46:40.973590 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.973602 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.975862 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.976526 kubelet[3543]: W1212 18:46:40.975882 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.975902 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.976526 kubelet[3543]: E1212 18:46:40.976242 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.978672 kubelet[3543]: W1212 18:46:40.976257 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.978672 kubelet[3543]: E1212 18:46:40.976280 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.978672 kubelet[3543]: E1212 18:46:40.976511 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.978672 kubelet[3543]: W1212 18:46:40.976529 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.978672 kubelet[3543]: E1212 18:46:40.976541 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.978672 kubelet[3543]: E1212 18:46:40.976723 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.978672 kubelet[3543]: W1212 18:46:40.976732 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.978672 kubelet[3543]: E1212 18:46:40.976743 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.978672 kubelet[3543]: E1212 18:46:40.976922 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.978672 kubelet[3543]: W1212 18:46:40.976931 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.976943 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.979275 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.985392 kubelet[3543]: W1212 18:46:40.979296 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.979319 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.983228 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.985392 kubelet[3543]: W1212 18:46:40.983250 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.983288 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.983537 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.985392 kubelet[3543]: W1212 18:46:40.983549 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985392 kubelet[3543]: E1212 18:46:40.983574 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.983739 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.985794 kubelet[3543]: W1212 18:46:40.983747 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.983764 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.984074 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.985794 kubelet[3543]: W1212 18:46:40.984095 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.984109 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.984430 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.985794 kubelet[3543]: W1212 18:46:40.984442 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.984454 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.985794 kubelet[3543]: E1212 18:46:40.984949 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.987372 kubelet[3543]: W1212 18:46:40.984962 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.987372 kubelet[3543]: E1212 18:46:40.984976 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.987372 kubelet[3543]: E1212 18:46:40.985216 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.987372 kubelet[3543]: W1212 18:46:40.985226 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.987372 kubelet[3543]: E1212 18:46:40.985240 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.987372 kubelet[3543]: E1212 18:46:40.986586 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.987372 kubelet[3543]: W1212 18:46:40.986601 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.987372 kubelet[3543]: E1212 18:46:40.986617 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:40.997139 kubelet[3543]: E1212 18:46:40.997102 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:40.997139 kubelet[3543]: W1212 18:46:40.997129 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:40.997347 kubelet[3543]: E1212 18:46:40.997153 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:42.192422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226118030.mount: Deactivated successfully.
Dec 12 18:46:42.209066 kubelet[3543]: E1212 18:46:42.208920 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d"
Dec 12 18:46:43.576583 containerd[1990]: time="2025-12-12T18:46:43.576521883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:43.577680 containerd[1990]: time="2025-12-12T18:46:43.577511191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Dec 12 18:46:43.578602 containerd[1990]: time="2025-12-12T18:46:43.578567740Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:43.581123 containerd[1990]: time="2025-12-12T18:46:43.581088134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:43.581772 containerd[1990]: time="2025-12-12T18:46:43.581413785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.85556038s"
Dec 12 18:46:43.581772 containerd[1990]: time="2025-12-12T18:46:43.581446857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Dec 12 18:46:43.582965 containerd[1990]: time="2025-12-12T18:46:43.582943983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 18:46:43.604583 containerd[1990]: time="2025-12-12T18:46:43.604538313Z" level=info msg="CreateContainer within sandbox \"4aae574f170beebc45c271c4103fd914ea973944c7bccad63cc8981ed3db092d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 18:46:43.619055 containerd[1990]: time="2025-12-12T18:46:43.616872380Z" level=info msg="Container a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:43.623562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99225399.mount: Deactivated successfully.
Dec 12 18:46:43.629719 containerd[1990]: time="2025-12-12T18:46:43.629672933Z" level=info msg="CreateContainer within sandbox \"4aae574f170beebc45c271c4103fd914ea973944c7bccad63cc8981ed3db092d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6\""
Dec 12 18:46:43.631127 containerd[1990]: time="2025-12-12T18:46:43.630758809Z" level=info msg="StartContainer for \"a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6\""
Dec 12 18:46:43.633325 containerd[1990]: time="2025-12-12T18:46:43.633236585Z" level=info msg="connecting to shim a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6" address="unix:///run/containerd/s/35eb5f66ca32f65b999ed94085f423ae8f594a5b0a5f7e62bc4ac471fb79ba11" protocol=ttrpc version=3
Dec 12 18:46:43.687272 systemd[1]: Started cri-containerd-a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6.scope - libcontainer container a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6.
Dec 12 18:46:43.752368 containerd[1990]: time="2025-12-12T18:46:43.752324954Z" level=info msg="StartContainer for \"a7992f48b7b6d79fc7883b480120be357071222474f73a3bc0a2af9e8f9b08a6\" returns successfully"
Dec 12 18:46:44.211144 kubelet[3543]: E1212 18:46:44.211087 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d"
Dec 12 18:46:44.406153 kubelet[3543]: E1212 18:46:44.406117 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:44.406153 kubelet[3543]: W1212 18:46:44.406144 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:44.408407 kubelet[3543]: E1212 18:46:44.408357 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:44.409011 kubelet[3543]: E1212 18:46:44.408980 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:44.409011 kubelet[3543]: W1212 18:46:44.409004 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:44.409333 kubelet[3543]: E1212 18:46:44.409028 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:44.409333 kubelet[3543]: E1212 18:46:44.409302 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:44.409333 kubelet[3543]: W1212 18:46:44.409314 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:44.409333 kubelet[3543]: E1212 18:46:44.409329 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:44.409599 kubelet[3543]: E1212 18:46:44.409578 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:44.409599 kubelet[3543]: W1212 18:46:44.409588 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:44.409750 kubelet[3543]: E1212 18:46:44.409613 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:44.409865 kubelet[3543]: E1212 18:46:44.409843 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:44.409865 kubelet[3543]: W1212 18:46:44.409857 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:44.410008 kubelet[3543]: E1212 18:46:44.409870 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 18:46:44.410108 kubelet[3543]: E1212 18:46:44.410090 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 18:46:44.410108 kubelet[3543]: W1212 18:46:44.410101 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 18:46:44.410274 kubelet[3543]: E1212 18:46:44.410113 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.410344 kubelet[3543]: E1212 18:46:44.410300 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.410344 kubelet[3543]: W1212 18:46:44.410309 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.410344 kubelet[3543]: E1212 18:46:44.410321 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.410564 kubelet[3543]: E1212 18:46:44.410527 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.410564 kubelet[3543]: W1212 18:46:44.410542 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.410664 kubelet[3543]: E1212 18:46:44.410564 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.410826 kubelet[3543]: E1212 18:46:44.410805 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.410826 kubelet[3543]: W1212 18:46:44.410821 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.410932 kubelet[3543]: E1212 18:46:44.410834 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.411115 kubelet[3543]: E1212 18:46:44.411098 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.411115 kubelet[3543]: W1212 18:46:44.411112 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.411346 kubelet[3543]: E1212 18:46:44.411124 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.411346 kubelet[3543]: E1212 18:46:44.411338 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.411438 kubelet[3543]: W1212 18:46:44.411351 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.411438 kubelet[3543]: E1212 18:46:44.411364 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.411612 kubelet[3543]: E1212 18:46:44.411576 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.411612 kubelet[3543]: W1212 18:46:44.411587 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.411612 kubelet[3543]: E1212 18:46:44.411599 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.411835 kubelet[3543]: E1212 18:46:44.411804 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.411835 kubelet[3543]: W1212 18:46:44.411814 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.411835 kubelet[3543]: E1212 18:46:44.411825 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.412067 kubelet[3543]: E1212 18:46:44.412019 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.412067 kubelet[3543]: W1212 18:46:44.412052 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.412067 kubelet[3543]: E1212 18:46:44.412065 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.412297 kubelet[3543]: E1212 18:46:44.412276 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.412297 kubelet[3543]: W1212 18:46:44.412285 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.412373 kubelet[3543]: E1212 18:46:44.412297 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.500342 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503061 kubelet[3543]: W1212 18:46:44.500375 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.500394 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.500621 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503061 kubelet[3543]: W1212 18:46:44.500628 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.500636 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.501174 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503061 kubelet[3543]: W1212 18:46:44.501185 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.501204 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.503061 kubelet[3543]: E1212 18:46:44.501409 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503482 kubelet[3543]: W1212 18:46:44.501417 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503482 kubelet[3543]: E1212 18:46:44.501426 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.503482 kubelet[3543]: E1212 18:46:44.501690 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503482 kubelet[3543]: W1212 18:46:44.501699 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503482 kubelet[3543]: E1212 18:46:44.501707 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.503482 kubelet[3543]: E1212 18:46:44.501903 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503482 kubelet[3543]: W1212 18:46:44.501909 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503482 kubelet[3543]: E1212 18:46:44.501918 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.503482 kubelet[3543]: E1212 18:46:44.502141 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503482 kubelet[3543]: W1212 18:46:44.502149 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502158 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502339 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503805 kubelet[3543]: W1212 18:46:44.502365 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502374 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502547 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503805 kubelet[3543]: W1212 18:46:44.502554 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502564 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502783 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.503805 kubelet[3543]: W1212 18:46:44.502791 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.503805 kubelet[3543]: E1212 18:46:44.502799 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503003 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.504389 kubelet[3543]: W1212 18:46:44.503013 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503021 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503558 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.504389 kubelet[3543]: W1212 18:46:44.503566 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503575 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503805 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.504389 kubelet[3543]: W1212 18:46:44.503812 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503821 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.504389 kubelet[3543]: E1212 18:46:44.503976 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.505212 kubelet[3543]: W1212 18:46:44.504001 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.505212 kubelet[3543]: E1212 18:46:44.504009 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.505212 kubelet[3543]: E1212 18:46:44.504204 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.505212 kubelet[3543]: W1212 18:46:44.504227 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.505212 kubelet[3543]: E1212 18:46:44.504236 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.505212 kubelet[3543]: E1212 18:46:44.504423 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.505212 kubelet[3543]: W1212 18:46:44.504430 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.505212 kubelet[3543]: E1212 18:46:44.504438 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:44.505212 kubelet[3543]: E1212 18:46:44.504731 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.505212 kubelet[3543]: W1212 18:46:44.504743 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.505993 kubelet[3543]: E1212 18:46:44.504764 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:46:44.505993 kubelet[3543]: E1212 18:46:44.504990 3543 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:46:44.505993 kubelet[3543]: W1212 18:46:44.504998 3543 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:46:44.505993 kubelet[3543]: E1212 18:46:44.505007 3543 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:46:45.075335 containerd[1990]: time="2025-12-12T18:46:45.075253185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:45.077169 containerd[1990]: time="2025-12-12T18:46:45.077124888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 12 18:46:45.081523 containerd[1990]: time="2025-12-12T18:46:45.079173236Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:45.090084 containerd[1990]: time="2025-12-12T18:46:45.089951578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:45.094801 containerd[1990]: time="2025-12-12T18:46:45.094749673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.51158107s" Dec 12 18:46:45.095006 containerd[1990]: time="2025-12-12T18:46:45.094983428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:46:45.101326 containerd[1990]: time="2025-12-12T18:46:45.101263458Z" level=info msg="CreateContainer within sandbox \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:46:45.117597 containerd[1990]: time="2025-12-12T18:46:45.116318920Z" level=info msg="Container 68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:45.138173 containerd[1990]: time="2025-12-12T18:46:45.137710158Z" level=info msg="CreateContainer within sandbox \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2\"" Dec 12 18:46:45.143850 containerd[1990]: time="2025-12-12T18:46:45.143693606Z" level=info msg="StartContainer for \"68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2\"" Dec 12 18:46:45.153372 containerd[1990]: time="2025-12-12T18:46:45.153318614Z" level=info msg="connecting to shim 68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2" address="unix:///run/containerd/s/2772d8c203de0dc6ba196dcea44dfdceecbca15135e5859f5d7213c6cf517995" protocol=ttrpc version=3 Dec 12 18:46:45.196595 systemd[1]: Started cri-containerd-68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2.scope - libcontainer container 
68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2. Dec 12 18:46:45.279002 containerd[1990]: time="2025-12-12T18:46:45.278953567Z" level=info msg="StartContainer for \"68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2\" returns successfully" Dec 12 18:46:45.292506 systemd[1]: cri-containerd-68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2.scope: Deactivated successfully. Dec 12 18:46:45.334911 containerd[1990]: time="2025-12-12T18:46:45.334739601Z" level=info msg="received container exit event container_id:\"68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2\" id:\"68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2\" pid:4221 exited_at:{seconds:1765565205 nanos:297886038}" Dec 12 18:46:45.356999 kubelet[3543]: I1212 18:46:45.356964 3543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:46:45.383888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68c2d96fd1705db4bfe7ac58b14f288f09112bfe00a6d76823d73e3b0243b4f2-rootfs.mount: Deactivated successfully. 
Dec 12 18:46:45.389060 kubelet[3543]: I1212 18:46:45.388959 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fbc8664c6-fqptg" podStartSLOduration=2.531636997 podStartE2EDuration="5.388938107s" podCreationTimestamp="2025-12-12 18:46:40 +0000 UTC" firstStartedPulling="2025-12-12 18:46:40.725015465 +0000 UTC m=+22.696724731" lastFinishedPulling="2025-12-12 18:46:43.582316589 +0000 UTC m=+25.554025841" observedRunningTime="2025-12-12 18:46:44.382563801 +0000 UTC m=+26.354273072" watchObservedRunningTime="2025-12-12 18:46:45.388938107 +0000 UTC m=+27.360647383" Dec 12 18:46:46.210194 kubelet[3543]: E1212 18:46:46.210149 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:46:46.363457 containerd[1990]: time="2025-12-12T18:46:46.363416159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:46:48.211320 kubelet[3543]: E1212 18:46:48.211265 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:46:50.052322 containerd[1990]: time="2025-12-12T18:46:50.052249845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:50.053389 containerd[1990]: time="2025-12-12T18:46:50.053177407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:46:50.054560 containerd[1990]: 
time="2025-12-12T18:46:50.054423384Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:50.056909 containerd[1990]: time="2025-12-12T18:46:50.056864943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:46:50.058003 containerd[1990]: time="2025-12-12T18:46:50.057666999Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.694210452s" Dec 12 18:46:50.058003 containerd[1990]: time="2025-12-12T18:46:50.057697446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:46:50.063091 containerd[1990]: time="2025-12-12T18:46:50.063012756Z" level=info msg="CreateContainer within sandbox \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:46:50.082445 containerd[1990]: time="2025-12-12T18:46:50.080603441Z" level=info msg="Container e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:50.089572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487713518.mount: Deactivated successfully. Dec 12 18:46:50.128269 systemd[1]: Started cri-containerd-e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01.scope - libcontainer container e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01. 
Dec 12 18:46:50.143871 containerd[1990]: time="2025-12-12T18:46:50.096602816Z" level=info msg="CreateContainer within sandbox \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01\"" Dec 12 18:46:50.143871 containerd[1990]: time="2025-12-12T18:46:50.097186698Z" level=info msg="StartContainer for \"e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01\"" Dec 12 18:46:50.143871 containerd[1990]: time="2025-12-12T18:46:50.098639838Z" level=info msg="connecting to shim e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01" address="unix:///run/containerd/s/2772d8c203de0dc6ba196dcea44dfdceecbca15135e5859f5d7213c6cf517995" protocol=ttrpc version=3 Dec 12 18:46:50.213208 kubelet[3543]: E1212 18:46:50.212551 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:46:50.217986 containerd[1990]: time="2025-12-12T18:46:50.217906114Z" level=info msg="StartContainer for \"e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01\" returns successfully" Dec 12 18:46:51.887619 systemd[1]: cri-containerd-e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01.scope: Deactivated successfully. Dec 12 18:46:51.887997 systemd[1]: cri-containerd-e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01.scope: Consumed 592ms CPU time, 166.9M memory peak, 6.6M read from disk, 171.3M written to disk. 
Dec 12 18:46:51.913450 containerd[1990]: time="2025-12-12T18:46:51.912610851Z" level=info msg="received container exit event container_id:\"e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01\" id:\"e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01\" pid:4280 exited_at:{seconds:1765565211 nanos:912134431}" Dec 12 18:46:51.967631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2d58f8237ea86d6bff264cc6e27d5eb8a99dae72495d87db4bf15aff756ac01-rootfs.mount: Deactivated successfully. Dec 12 18:46:51.994826 kubelet[3543]: I1212 18:46:51.994688 3543 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:46:52.075737 systemd[1]: Created slice kubepods-burstable-pod1f4fc72d_306f_401c_8038_da87f142a57b.slice - libcontainer container kubepods-burstable-pod1f4fc72d_306f_401c_8038_da87f142a57b.slice. Dec 12 18:46:52.088693 systemd[1]: Created slice kubepods-besteffort-podba2f2d53_b502_4a41_a1a8_fae69661a05c.slice - libcontainer container kubepods-besteffort-podba2f2d53_b502_4a41_a1a8_fae69661a05c.slice. Dec 12 18:46:52.108665 systemd[1]: Created slice kubepods-burstable-pod82756a7a_e0ea_4024_9ee9_49158171866e.slice - libcontainer container kubepods-burstable-pod82756a7a_e0ea_4024_9ee9_49158171866e.slice. Dec 12 18:46:52.119251 systemd[1]: Created slice kubepods-besteffort-pod291bd305_3797_4e86_a6bf_9a26259b5097.slice - libcontainer container kubepods-besteffort-pod291bd305_3797_4e86_a6bf_9a26259b5097.slice. Dec 12 18:46:52.128647 systemd[1]: Created slice kubepods-besteffort-pod627e8918_ce59_4b1e_a58e_99fb7e0005f5.slice - libcontainer container kubepods-besteffort-pod627e8918_ce59_4b1e_a58e_99fb7e0005f5.slice. Dec 12 18:46:52.143541 systemd[1]: Created slice kubepods-besteffort-pode69104d4_3599_4ed4_87b8_edf0ec255633.slice - libcontainer container kubepods-besteffort-pode69104d4_3599_4ed4_87b8_edf0ec255633.slice. 
Dec 12 18:46:52.154103 systemd[1]: Created slice kubepods-besteffort-pod9914e2c9_7a65_4cf8_bb0f_0c43fb4d4b6d.slice - libcontainer container kubepods-besteffort-pod9914e2c9_7a65_4cf8_bb0f_0c43fb4d4b6d.slice. Dec 12 18:46:52.158752 kubelet[3543]: I1212 18:46:52.158517 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfq2\" (UniqueName: \"kubernetes.io/projected/ba2f2d53-b502-4a41-a1a8-fae69661a05c-kube-api-access-pzfq2\") pod \"calico-apiserver-6f58b74bcb-ql2z4\" (UID: \"ba2f2d53-b502-4a41-a1a8-fae69661a05c\") " pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" Dec 12 18:46:52.158752 kubelet[3543]: I1212 18:46:52.158577 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82756a7a-e0ea-4024-9ee9-49158171866e-config-volume\") pod \"coredns-674b8bbfcf-tmgnz\" (UID: \"82756a7a-e0ea-4024-9ee9-49158171866e\") " pod="kube-system/coredns-674b8bbfcf-tmgnz" Dec 12 18:46:52.158752 kubelet[3543]: I1212 18:46:52.158605 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkh8j\" (UniqueName: \"kubernetes.io/projected/1f4fc72d-306f-401c-8038-da87f142a57b-kube-api-access-dkh8j\") pod \"coredns-674b8bbfcf-gc8gj\" (UID: \"1f4fc72d-306f-401c-8038-da87f142a57b\") " pod="kube-system/coredns-674b8bbfcf-gc8gj" Dec 12 18:46:52.158752 kubelet[3543]: I1212 18:46:52.158650 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-ca-bundle\") pod \"whisker-5c6d56cd-s9b4v\" (UID: \"291bd305-3797-4e86-a6bf-9a26259b5097\") " pod="calico-system/whisker-5c6d56cd-s9b4v" Dec 12 18:46:52.158752 kubelet[3543]: I1212 18:46:52.158674 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/627e8918-ce59-4b1e-a58e-99fb7e0005f5-config\") pod \"goldmane-666569f655-sjdx8\" (UID: \"627e8918-ce59-4b1e-a58e-99fb7e0005f5\") " pod="calico-system/goldmane-666569f655-sjdx8" Dec 12 18:46:52.159126 kubelet[3543]: I1212 18:46:52.158765 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f4fc72d-306f-401c-8038-da87f142a57b-config-volume\") pod \"coredns-674b8bbfcf-gc8gj\" (UID: \"1f4fc72d-306f-401c-8038-da87f142a57b\") " pod="kube-system/coredns-674b8bbfcf-gc8gj" Dec 12 18:46:52.159126 kubelet[3543]: I1212 18:46:52.158795 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdgl\" (UniqueName: \"kubernetes.io/projected/82756a7a-e0ea-4024-9ee9-49158171866e-kube-api-access-hmdgl\") pod \"coredns-674b8bbfcf-tmgnz\" (UID: \"82756a7a-e0ea-4024-9ee9-49158171866e\") " pod="kube-system/coredns-674b8bbfcf-tmgnz" Dec 12 18:46:52.159126 kubelet[3543]: I1212 18:46:52.158840 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d-calico-apiserver-certs\") pod \"calico-apiserver-6f58b74bcb-s6q4x\" (UID: \"9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d\") " pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" Dec 12 18:46:52.159126 kubelet[3543]: I1212 18:46:52.158863 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-backend-key-pair\") pod \"whisker-5c6d56cd-s9b4v\" (UID: \"291bd305-3797-4e86-a6bf-9a26259b5097\") " pod="calico-system/whisker-5c6d56cd-s9b4v" Dec 12 18:46:52.159126 kubelet[3543]: I1212 18:46:52.158913 3543 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ba2f2d53-b502-4a41-a1a8-fae69661a05c-calico-apiserver-certs\") pod \"calico-apiserver-6f58b74bcb-ql2z4\" (UID: \"ba2f2d53-b502-4a41-a1a8-fae69661a05c\") " pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" Dec 12 18:46:52.159350 kubelet[3543]: I1212 18:46:52.158938 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsgsq\" (UniqueName: \"kubernetes.io/projected/627e8918-ce59-4b1e-a58e-99fb7e0005f5-kube-api-access-lsgsq\") pod \"goldmane-666569f655-sjdx8\" (UID: \"627e8918-ce59-4b1e-a58e-99fb7e0005f5\") " pod="calico-system/goldmane-666569f655-sjdx8" Dec 12 18:46:52.159350 kubelet[3543]: I1212 18:46:52.158979 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e69104d4-3599-4ed4-87b8-edf0ec255633-tigera-ca-bundle\") pod \"calico-kube-controllers-69dcd64969-ztnlv\" (UID: \"e69104d4-3599-4ed4-87b8-edf0ec255633\") " pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" Dec 12 18:46:52.159350 kubelet[3543]: I1212 18:46:52.159018 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qj2s\" (UniqueName: \"kubernetes.io/projected/e69104d4-3599-4ed4-87b8-edf0ec255633-kube-api-access-9qj2s\") pod \"calico-kube-controllers-69dcd64969-ztnlv\" (UID: \"e69104d4-3599-4ed4-87b8-edf0ec255633\") " pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" Dec 12 18:46:52.159350 kubelet[3543]: I1212 18:46:52.159146 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfj4b\" (UniqueName: \"kubernetes.io/projected/291bd305-3797-4e86-a6bf-9a26259b5097-kube-api-access-sfj4b\") pod \"whisker-5c6d56cd-s9b4v\" (UID: 
\"291bd305-3797-4e86-a6bf-9a26259b5097\") " pod="calico-system/whisker-5c6d56cd-s9b4v" Dec 12 18:46:52.159350 kubelet[3543]: I1212 18:46:52.159177 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/627e8918-ce59-4b1e-a58e-99fb7e0005f5-goldmane-ca-bundle\") pod \"goldmane-666569f655-sjdx8\" (UID: \"627e8918-ce59-4b1e-a58e-99fb7e0005f5\") " pod="calico-system/goldmane-666569f655-sjdx8" Dec 12 18:46:52.161294 kubelet[3543]: I1212 18:46:52.159337 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/627e8918-ce59-4b1e-a58e-99fb7e0005f5-goldmane-key-pair\") pod \"goldmane-666569f655-sjdx8\" (UID: \"627e8918-ce59-4b1e-a58e-99fb7e0005f5\") " pod="calico-system/goldmane-666569f655-sjdx8" Dec 12 18:46:52.161294 kubelet[3543]: I1212 18:46:52.159408 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgzl\" (UniqueName: \"kubernetes.io/projected/9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d-kube-api-access-8lgzl\") pod \"calico-apiserver-6f58b74bcb-s6q4x\" (UID: \"9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d\") " pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" Dec 12 18:46:52.220450 systemd[1]: Created slice kubepods-besteffort-podc534bc62_f909_4723_a1ce_dd8a325ef04d.slice - libcontainer container kubepods-besteffort-podc534bc62_f909_4723_a1ce_dd8a325ef04d.slice. 
Dec 12 18:46:52.237050 containerd[1990]: time="2025-12-12T18:46:52.236969054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtgc,Uid:c534bc62-f909-4723-a1ce-dd8a325ef04d,Namespace:calico-system,Attempt:0,}" Dec 12 18:46:52.444498 containerd[1990]: time="2025-12-12T18:46:52.443862670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tmgnz,Uid:82756a7a-e0ea-4024-9ee9-49158171866e,Namespace:kube-system,Attempt:0,}" Dec 12 18:46:52.444941 containerd[1990]: time="2025-12-12T18:46:52.443913376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gc8gj,Uid:1f4fc72d-306f-401c-8038-da87f142a57b,Namespace:kube-system,Attempt:0,}" Dec 12 18:46:52.445205 containerd[1990]: time="2025-12-12T18:46:52.443993414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-ql2z4,Uid:ba2f2d53-b502-4a41-a1a8-fae69661a05c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:46:52.445205 containerd[1990]: time="2025-12-12T18:46:52.444189180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6d56cd-s9b4v,Uid:291bd305-3797-4e86-a6bf-9a26259b5097,Namespace:calico-system,Attempt:0,}" Dec 12 18:46:52.445352 containerd[1990]: time="2025-12-12T18:46:52.444218905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjdx8,Uid:627e8918-ce59-4b1e-a58e-99fb7e0005f5,Namespace:calico-system,Attempt:0,}" Dec 12 18:46:52.453698 containerd[1990]: time="2025-12-12T18:46:52.453651233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dcd64969-ztnlv,Uid:e69104d4-3599-4ed4-87b8-edf0ec255633,Namespace:calico-system,Attempt:0,}" Dec 12 18:46:52.462766 containerd[1990]: time="2025-12-12T18:46:52.462708483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-s6q4x,Uid:9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:46:52.509815 
containerd[1990]: time="2025-12-12T18:46:52.509693738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:46:55.587113 containerd[1990]: time="2025-12-12T18:46:55.584399337Z" level=error msg="Failed to destroy network for sandbox \"96d3046e664d642afd244eacbcc070b1b91148e10b06981a04e1558ab62bd972\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.590232 systemd[1]: run-netns-cni\x2dd566e975\x2d07d3\x2d3226\x2d7678\x2de601ef7dcd7f.mount: Deactivated successfully. Dec 12 18:46:55.600002 containerd[1990]: time="2025-12-12T18:46:55.599800486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dcd64969-ztnlv,Uid:e69104d4-3599-4ed4-87b8-edf0ec255633,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d3046e664d642afd244eacbcc070b1b91148e10b06981a04e1558ab62bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.669396 kubelet[3543]: E1212 18:46:55.669337 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d3046e664d642afd244eacbcc070b1b91148e10b06981a04e1558ab62bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.669866 kubelet[3543]: E1212 18:46:55.669422 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d3046e664d642afd244eacbcc070b1b91148e10b06981a04e1558ab62bd972\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" Dec 12 18:46:55.669866 kubelet[3543]: E1212 18:46:55.669449 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d3046e664d642afd244eacbcc070b1b91148e10b06981a04e1558ab62bd972\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" Dec 12 18:46:55.669866 kubelet[3543]: E1212 18:46:55.669526 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69dcd64969-ztnlv_calico-system(e69104d4-3599-4ed4-87b8-edf0ec255633)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69dcd64969-ztnlv_calico-system(e69104d4-3599-4ed4-87b8-edf0ec255633)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96d3046e664d642afd244eacbcc070b1b91148e10b06981a04e1558ab62bd972\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:46:55.672935 containerd[1990]: time="2025-12-12T18:46:55.672886567Z" level=error msg="Failed to destroy network for sandbox \"1564651bdf38c82d9c9e032664f0d8321f4a63b9049adde8a934083ea46b8854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.681473 systemd[1]: 
run-netns-cni\x2dc30dd7b8\x2dc595\x2d5f11\x2d3ca3\x2d119bcd10abb9.mount: Deactivated successfully. Dec 12 18:46:55.700493 containerd[1990]: time="2025-12-12T18:46:55.700418495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6d56cd-s9b4v,Uid:291bd305-3797-4e86-a6bf-9a26259b5097,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1564651bdf38c82d9c9e032664f0d8321f4a63b9049adde8a934083ea46b8854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.700692 containerd[1990]: time="2025-12-12T18:46:55.700593834Z" level=error msg="Failed to destroy network for sandbox \"e54b4998f685e30d21f69f48b996cd75e36c54277d845d75e0d2649b50d8a5a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.701128 containerd[1990]: time="2025-12-12T18:46:55.701094370Z" level=error msg="Failed to destroy network for sandbox \"6a65acec370f43c26ceb7016bc1f6a9e6b75699ef583f5c8a9e88ebe419e7780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.701458 containerd[1990]: time="2025-12-12T18:46:55.701429347Z" level=error msg="Failed to destroy network for sandbox \"c6826040ecb06d7b82dbedd78d1974f36bf88bed88678ee0ab785ccb0eae5123\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.702566 containerd[1990]: time="2025-12-12T18:46:55.702528211Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gc8gj,Uid:1f4fc72d-306f-401c-8038-da87f142a57b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e54b4998f685e30d21f69f48b996cd75e36c54277d845d75e0d2649b50d8a5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.706186 containerd[1990]: time="2025-12-12T18:46:55.706135809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjdx8,Uid:627e8918-ce59-4b1e-a58e-99fb7e0005f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a65acec370f43c26ceb7016bc1f6a9e6b75699ef583f5c8a9e88ebe419e7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.707272 containerd[1990]: time="2025-12-12T18:46:55.707231997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtgc,Uid:c534bc62-f909-4723-a1ce-dd8a325ef04d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6826040ecb06d7b82dbedd78d1974f36bf88bed88678ee0ab785ccb0eae5123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.707633 containerd[1990]: time="2025-12-12T18:46:55.707607920Z" level=error msg="Failed to destroy network for sandbox \"ffb36d77f4f5c03d2869efe2a895e9dc4556fdf5e1a023d57590d34f9bc8e87c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 12 18:46:55.708221 kubelet[3543]: E1212 18:46:55.708002 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6826040ecb06d7b82dbedd78d1974f36bf88bed88678ee0ab785ccb0eae5123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.709619 kubelet[3543]: E1212 18:46:55.708089 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1564651bdf38c82d9c9e032664f0d8321f4a63b9049adde8a934083ea46b8854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.709619 kubelet[3543]: E1212 18:46:55.709402 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1564651bdf38c82d9c9e032664f0d8321f4a63b9049adde8a934083ea46b8854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c6d56cd-s9b4v" Dec 12 18:46:55.709619 kubelet[3543]: E1212 18:46:55.709442 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1564651bdf38c82d9c9e032664f0d8321f4a63b9049adde8a934083ea46b8854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c6d56cd-s9b4v" Dec 12 18:46:55.710227 kubelet[3543]: E1212 18:46:55.709505 3543 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"whisker-5c6d56cd-s9b4v_calico-system(291bd305-3797-4e86-a6bf-9a26259b5097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c6d56cd-s9b4v_calico-system(291bd305-3797-4e86-a6bf-9a26259b5097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1564651bdf38c82d9c9e032664f0d8321f4a63b9049adde8a934083ea46b8854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c6d56cd-s9b4v" podUID="291bd305-3797-4e86-a6bf-9a26259b5097" Dec 12 18:46:55.710415 containerd[1990]: time="2025-12-12T18:46:55.710381295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-ql2z4,Uid:ba2f2d53-b502-4a41-a1a8-fae69661a05c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffb36d77f4f5c03d2869efe2a895e9dc4556fdf5e1a023d57590d34f9bc8e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.711862 systemd[1]: run-netns-cni\x2d784374e0\x2d8318\x2ddf49\x2d4325\x2dc42b0c064f1f.mount: Deactivated successfully. Dec 12 18:46:55.712003 systemd[1]: run-netns-cni\x2d3397958e\x2d5f3a\x2dcd86\x2de247\x2d8c4177359b1a.mount: Deactivated successfully. 
Dec 12 18:46:55.713198 kubelet[3543]: E1212 18:46:55.713139 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6826040ecb06d7b82dbedd78d1974f36bf88bed88678ee0ab785ccb0eae5123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rtgc" Dec 12 18:46:55.713284 kubelet[3543]: E1212 18:46:55.713201 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6826040ecb06d7b82dbedd78d1974f36bf88bed88678ee0ab785ccb0eae5123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8rtgc" Dec 12 18:46:55.713329 kubelet[3543]: E1212 18:46:55.713271 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6826040ecb06d7b82dbedd78d1974f36bf88bed88678ee0ab785ccb0eae5123\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:46:55.713329 kubelet[3543]: E1212 18:46:55.708142 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a65acec370f43c26ceb7016bc1f6a9e6b75699ef583f5c8a9e88ebe419e7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.713468 kubelet[3543]: E1212 18:46:55.713343 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a65acec370f43c26ceb7016bc1f6a9e6b75699ef583f5c8a9e88ebe419e7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-sjdx8" Dec 12 18:46:55.713468 kubelet[3543]: E1212 18:46:55.713361 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a65acec370f43c26ceb7016bc1f6a9e6b75699ef583f5c8a9e88ebe419e7780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-sjdx8" Dec 12 18:46:55.713468 kubelet[3543]: E1212 18:46:55.713401 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-sjdx8_calico-system(627e8918-ce59-4b1e-a58e-99fb7e0005f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-sjdx8_calico-system(627e8918-ce59-4b1e-a58e-99fb7e0005f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a65acec370f43c26ceb7016bc1f6a9e6b75699ef583f5c8a9e88ebe419e7780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-sjdx8" 
podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:46:55.713622 kubelet[3543]: E1212 18:46:55.708113 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e54b4998f685e30d21f69f48b996cd75e36c54277d845d75e0d2649b50d8a5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.713622 kubelet[3543]: E1212 18:46:55.713440 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e54b4998f685e30d21f69f48b996cd75e36c54277d845d75e0d2649b50d8a5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gc8gj" Dec 12 18:46:55.713622 kubelet[3543]: E1212 18:46:55.713458 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e54b4998f685e30d21f69f48b996cd75e36c54277d845d75e0d2649b50d8a5a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gc8gj" Dec 12 18:46:55.713798 kubelet[3543]: E1212 18:46:55.713494 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gc8gj_kube-system(1f4fc72d-306f-401c-8038-da87f142a57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gc8gj_kube-system(1f4fc72d-306f-401c-8038-da87f142a57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e54b4998f685e30d21f69f48b996cd75e36c54277d845d75e0d2649b50d8a5a5\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gc8gj" podUID="1f4fc72d-306f-401c-8038-da87f142a57b" Dec 12 18:46:55.715047 kubelet[3543]: E1212 18:46:55.714758 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffb36d77f4f5c03d2869efe2a895e9dc4556fdf5e1a023d57590d34f9bc8e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.715047 kubelet[3543]: E1212 18:46:55.714817 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffb36d77f4f5c03d2869efe2a895e9dc4556fdf5e1a023d57590d34f9bc8e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" Dec 12 18:46:55.715047 kubelet[3543]: E1212 18:46:55.714845 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffb36d77f4f5c03d2869efe2a895e9dc4556fdf5e1a023d57590d34f9bc8e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" Dec 12 18:46:55.715220 kubelet[3543]: E1212 18:46:55.714900 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f58b74bcb-ql2z4_calico-apiserver(ba2f2d53-b502-4a41-a1a8-fae69661a05c)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f58b74bcb-ql2z4_calico-apiserver(ba2f2d53-b502-4a41-a1a8-fae69661a05c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffb36d77f4f5c03d2869efe2a895e9dc4556fdf5e1a023d57590d34f9bc8e87c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:46:55.755836 kubelet[3543]: E1212 18:46:55.750813 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62c92c317d85cf9f402e6e6c5b89c9311cc3c6e9763b5b3e496001ea25a93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.755836 kubelet[3543]: E1212 18:46:55.750889 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62c92c317d85cf9f402e6e6c5b89c9311cc3c6e9763b5b3e496001ea25a93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tmgnz" Dec 12 18:46:55.755836 kubelet[3543]: E1212 18:46:55.750914 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62c92c317d85cf9f402e6e6c5b89c9311cc3c6e9763b5b3e496001ea25a93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-tmgnz" Dec 12 18:46:55.756598 containerd[1990]: time="2025-12-12T18:46:55.742235472Z" level=error msg="Failed to destroy network for sandbox \"b62c92c317d85cf9f402e6e6c5b89c9311cc3c6e9763b5b3e496001ea25a93bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.756598 containerd[1990]: time="2025-12-12T18:46:55.744254301Z" level=error msg="Failed to destroy network for sandbox \"0062d57a3281150e0ef70a05c2156c3c26c273a0e86e40ecbfc6518a9ac56d6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.756598 containerd[1990]: time="2025-12-12T18:46:55.750474138Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tmgnz,Uid:82756a7a-e0ea-4024-9ee9-49158171866e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62c92c317d85cf9f402e6e6c5b89c9311cc3c6e9763b5b3e496001ea25a93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.715230 systemd[1]: run-netns-cni\x2d634f0703\x2d665b\x2d0481\x2d6a6f\x2d5c621b3e8697.mount: Deactivated successfully. 
Dec 12 18:46:55.758365 kubelet[3543]: E1212 18:46:55.750976 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tmgnz_kube-system(82756a7a-e0ea-4024-9ee9-49158171866e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tmgnz_kube-system(82756a7a-e0ea-4024-9ee9-49158171866e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b62c92c317d85cf9f402e6e6c5b89c9311cc3c6e9763b5b3e496001ea25a93bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tmgnz" podUID="82756a7a-e0ea-4024-9ee9-49158171866e" Dec 12 18:46:55.729661 systemd[1]: run-netns-cni\x2d09754ba1\x2dbacc\x2dff11\x2d4eab\x2d737cabf5ecc5.mount: Deactivated successfully. Dec 12 18:46:55.746374 systemd[1]: run-netns-cni\x2df5bb30e0\x2dea0c\x2d8819\x2d4e44\x2d25f7a2685939.mount: Deactivated successfully. 
Dec 12 18:46:55.760357 containerd[1990]: time="2025-12-12T18:46:55.760283371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-s6q4x,Uid:9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0062d57a3281150e0ef70a05c2156c3c26c273a0e86e40ecbfc6518a9ac56d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.761417 kubelet[3543]: E1212 18:46:55.761119 3543 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0062d57a3281150e0ef70a05c2156c3c26c273a0e86e40ecbfc6518a9ac56d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:46:55.762258 kubelet[3543]: E1212 18:46:55.762220 3543 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0062d57a3281150e0ef70a05c2156c3c26c273a0e86e40ecbfc6518a9ac56d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" Dec 12 18:46:55.762356 kubelet[3543]: E1212 18:46:55.762260 3543 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0062d57a3281150e0ef70a05c2156c3c26c273a0e86e40ecbfc6518a9ac56d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" Dec 12 18:46:55.762356 kubelet[3543]: E1212 18:46:55.762324 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f58b74bcb-s6q4x_calico-apiserver(9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f58b74bcb-s6q4x_calico-apiserver(9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0062d57a3281150e0ef70a05c2156c3c26c273a0e86e40ecbfc6518a9ac56d6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:46:56.595895 systemd[1]: run-netns-cni\x2d8e869f05\x2dc1a9\x2da715\x2d457b\x2d28a148a7263c.mount: Deactivated successfully. Dec 12 18:46:59.952759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576654921.mount: Deactivated successfully. 
Dec 12 18:47:00.112696 containerd[1990]: time="2025-12-12T18:47:00.111544490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:47:00.113307 containerd[1990]: time="2025-12-12T18:47:00.088547556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:00.122990 containerd[1990]: time="2025-12-12T18:47:00.122937173Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:00.124281 containerd[1990]: time="2025-12-12T18:47:00.124240300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:47:00.129163 containerd[1990]: time="2025-12-12T18:47:00.129106242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.613970844s" Dec 12 18:47:00.129371 containerd[1990]: time="2025-12-12T18:47:00.129350480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:47:00.192059 containerd[1990]: time="2025-12-12T18:47:00.191971362Z" level=info msg="CreateContainer within sandbox \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:47:00.278091 containerd[1990]: time="2025-12-12T18:47:00.276646254Z" level=info msg="Container 
2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:47:00.280843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522504531.mount: Deactivated successfully. Dec 12 18:47:00.331925 containerd[1990]: time="2025-12-12T18:47:00.331870929Z" level=info msg="CreateContainer within sandbox \"f2b735d0fa846ea722d8a24b6208684e12b483b9d46fff6d4bb67d5d79b86225\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e\"" Dec 12 18:47:00.333330 containerd[1990]: time="2025-12-12T18:47:00.333284762Z" level=info msg="StartContainer for \"2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e\"" Dec 12 18:47:00.341643 containerd[1990]: time="2025-12-12T18:47:00.341592407Z" level=info msg="connecting to shim 2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e" address="unix:///run/containerd/s/2772d8c203de0dc6ba196dcea44dfdceecbca15135e5859f5d7213c6cf517995" protocol=ttrpc version=3 Dec 12 18:47:00.458256 systemd[1]: Started cri-containerd-2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e.scope - libcontainer container 2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e. 
Dec 12 18:47:00.560469 containerd[1990]: time="2025-12-12T18:47:00.560373804Z" level=info msg="StartContainer for \"2f63fccfe56cb77621e4f77866aa1ced3a055501c7eb7d873dd893ddf34ba84e\" returns successfully" Dec 12 18:47:00.618192 kubelet[3543]: I1212 18:47:00.614519 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9fzlv" podStartSLOduration=1.409971211 podStartE2EDuration="20.614496827s" podCreationTimestamp="2025-12-12 18:46:40 +0000 UTC" firstStartedPulling="2025-12-12 18:46:40.938419017 +0000 UTC m=+22.910128269" lastFinishedPulling="2025-12-12 18:47:00.142944632 +0000 UTC m=+42.114653885" observedRunningTime="2025-12-12 18:47:00.613693346 +0000 UTC m=+42.585402621" watchObservedRunningTime="2025-12-12 18:47:00.614496827 +0000 UTC m=+42.586206103" Dec 12 18:47:00.861108 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:47:00.893087 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 12 18:47:01.625439 kubelet[3543]: I1212 18:47:01.621075 3543 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:47:03.305287 kubelet[3543]: I1212 18:47:03.304622 3543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfj4b\" (UniqueName: \"kubernetes.io/projected/291bd305-3797-4e86-a6bf-9a26259b5097-kube-api-access-sfj4b\") pod \"291bd305-3797-4e86-a6bf-9a26259b5097\" (UID: \"291bd305-3797-4e86-a6bf-9a26259b5097\") " Dec 12 18:47:03.305287 kubelet[3543]: I1212 18:47:03.304683 3543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-ca-bundle\") pod \"291bd305-3797-4e86-a6bf-9a26259b5097\" (UID: \"291bd305-3797-4e86-a6bf-9a26259b5097\") " Dec 12 18:47:03.305287 kubelet[3543]: I1212 18:47:03.304750 3543 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-backend-key-pair\") pod \"291bd305-3797-4e86-a6bf-9a26259b5097\" (UID: \"291bd305-3797-4e86-a6bf-9a26259b5097\") " Dec 12 18:47:03.399973 systemd[1]: var-lib-kubelet-pods-291bd305\x2d3797\x2d4e86\x2da6bf\x2d9a26259b5097-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfj4b.mount: Deactivated successfully. Dec 12 18:47:03.420513 systemd[1]: var-lib-kubelet-pods-291bd305\x2d3797\x2d4e86\x2da6bf\x2d9a26259b5097-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:47:03.440342 kubelet[3543]: I1212 18:47:03.401026 3543 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "291bd305-3797-4e86-a6bf-9a26259b5097" (UID: "291bd305-3797-4e86-a6bf-9a26259b5097"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:47:03.440342 kubelet[3543]: I1212 18:47:03.403517 3543 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291bd305-3797-4e86-a6bf-9a26259b5097-kube-api-access-sfj4b" (OuterVolumeSpecName: "kube-api-access-sfj4b") pod "291bd305-3797-4e86-a6bf-9a26259b5097" (UID: "291bd305-3797-4e86-a6bf-9a26259b5097"). InnerVolumeSpecName "kube-api-access-sfj4b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:47:03.440342 kubelet[3543]: I1212 18:47:03.417247 3543 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "291bd305-3797-4e86-a6bf-9a26259b5097" (UID: "291bd305-3797-4e86-a6bf-9a26259b5097"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:47:03.440342 kubelet[3543]: I1212 18:47:03.418502 3543 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-backend-key-pair\") on node \"ip-172-31-25-153\" DevicePath \"\"" Dec 12 18:47:03.440342 kubelet[3543]: I1212 18:47:03.418534 3543 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sfj4b\" (UniqueName: \"kubernetes.io/projected/291bd305-3797-4e86-a6bf-9a26259b5097-kube-api-access-sfj4b\") on node \"ip-172-31-25-153\" DevicePath \"\"" Dec 12 18:47:03.440342 kubelet[3543]: I1212 18:47:03.418548 3543 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/291bd305-3797-4e86-a6bf-9a26259b5097-whisker-ca-bundle\") on node \"ip-172-31-25-153\" DevicePath \"\"" Dec 12 18:47:03.640752 systemd[1]: Removed slice 
kubepods-besteffort-pod291bd305_3797_4e86_a6bf_9a26259b5097.slice - libcontainer container kubepods-besteffort-pod291bd305_3797_4e86_a6bf_9a26259b5097.slice. Dec 12 18:47:04.257156 kubelet[3543]: I1212 18:47:04.257106 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjf9\" (UniqueName: \"kubernetes.io/projected/20a2fd47-4a22-4521-b92b-0d8c954400d5-kube-api-access-5tjf9\") pod \"whisker-6448458988-2gsdl\" (UID: \"20a2fd47-4a22-4521-b92b-0d8c954400d5\") " pod="calico-system/whisker-6448458988-2gsdl" Dec 12 18:47:04.261894 kubelet[3543]: I1212 18:47:04.261857 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20a2fd47-4a22-4521-b92b-0d8c954400d5-whisker-backend-key-pair\") pod \"whisker-6448458988-2gsdl\" (UID: \"20a2fd47-4a22-4521-b92b-0d8c954400d5\") " pod="calico-system/whisker-6448458988-2gsdl" Dec 12 18:47:04.268903 kubelet[3543]: I1212 18:47:04.259464 3543 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291bd305-3797-4e86-a6bf-9a26259b5097" path="/var/lib/kubelet/pods/291bd305-3797-4e86-a6bf-9a26259b5097/volumes" Dec 12 18:47:04.270797 kubelet[3543]: I1212 18:47:04.269127 3543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20a2fd47-4a22-4521-b92b-0d8c954400d5-whisker-ca-bundle\") pod \"whisker-6448458988-2gsdl\" (UID: \"20a2fd47-4a22-4521-b92b-0d8c954400d5\") " pod="calico-system/whisker-6448458988-2gsdl" Dec 12 18:47:04.319274 systemd[1]: Created slice kubepods-besteffort-pod20a2fd47_4a22_4521_b92b_0d8c954400d5.slice - libcontainer container kubepods-besteffort-pod20a2fd47_4a22_4521_b92b_0d8c954400d5.slice. 
Dec 12 18:47:04.655996 containerd[1990]: time="2025-12-12T18:47:04.655401401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6448458988-2gsdl,Uid:20a2fd47-4a22-4521-b92b-0d8c954400d5,Namespace:calico-system,Attempt:0,}" Dec 12 18:47:06.004237 systemd-networkd[1833]: vxlan.calico: Link UP Dec 12 18:47:06.004248 systemd-networkd[1833]: vxlan.calico: Gained carrier Dec 12 18:47:06.083527 (udev-worker)[4812]: Network interface NamePolicy= disabled on kernel command line. Dec 12 18:47:06.084239 (udev-worker)[4808]: Network interface NamePolicy= disabled on kernel command line. Dec 12 18:47:06.089839 (udev-worker)[4824]: Network interface NamePolicy= disabled on kernel command line. Dec 12 18:47:07.209301 containerd[1990]: time="2025-12-12T18:47:07.209263003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tmgnz,Uid:82756a7a-e0ea-4024-9ee9-49158171866e,Namespace:kube-system,Attempt:0,}" Dec 12 18:47:07.209967 containerd[1990]: time="2025-12-12T18:47:07.209288631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dcd64969-ztnlv,Uid:e69104d4-3599-4ed4-87b8-edf0ec255633,Namespace:calico-system,Attempt:0,}" Dec 12 18:47:07.490801 systemd-networkd[1833]: vxlan.calico: Gained IPv6LL Dec 12 18:47:07.685043 systemd[1]: Started sshd@9-172.31.25.153:22-139.178.89.65:40650.service - OpenSSH per-connection server daemon (139.178.89.65:40650). Dec 12 18:47:07.922204 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 40650 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:07.925155 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:07.931832 systemd-logind[1966]: New session 10 of user core. Dec 12 18:47:07.940281 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 12 18:47:08.212472 containerd[1990]: time="2025-12-12T18:47:08.211748406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-s6q4x,Uid:9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:47:08.214600 containerd[1990]: time="2025-12-12T18:47:08.213515174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtgc,Uid:c534bc62-f909-4723-a1ce-dd8a325ef04d,Namespace:calico-system,Attempt:0,}" Dec 12 18:47:08.829976 sshd[4872]: Connection closed by 139.178.89.65 port 40650 Dec 12 18:47:08.830730 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:08.846331 systemd[1]: sshd@9-172.31.25.153:22-139.178.89.65:40650.service: Deactivated successfully. Dec 12 18:47:08.859547 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:47:08.862250 systemd-logind[1966]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:47:08.865207 systemd-logind[1966]: Removed session 10. 
Dec 12 18:47:09.257928 systemd-networkd[1833]: cali25649847639: Link UP Dec 12 18:47:09.258949 systemd-networkd[1833]: cali25649847639: Gained carrier Dec 12 18:47:09.285402 containerd[1990]: 2025-12-12 18:47:04.801 [INFO][4745] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:47:09.285402 containerd[1990]: 2025-12-12 18:47:05.341 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0 whisker-6448458988- calico-system 20a2fd47-4a22-4521-b92b-0d8c954400d5 901 0 2025-12-12 18:47:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6448458988 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-25-153 whisker-6448458988-2gsdl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali25649847639 [] [] }} ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-" Dec 12 18:47:09.285402 containerd[1990]: 2025-12-12 18:47:05.342 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.285402 containerd[1990]: 2025-12-12 18:47:09.014 [INFO][4792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" HandleID="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Workload="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.016 
[INFO][4792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" HandleID="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Workload="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000340200), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-153", "pod":"whisker-6448458988-2gsdl", "timestamp":"2025-12-12 18:47:09.014412215 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.016 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.017 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.017 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.033 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" host="ip-172-31-25-153" Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.221 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.227 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.229 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:09.285875 containerd[1990]: 2025-12-12 18:47:09.232 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.232 [INFO][4792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" host="ip-172-31-25-153" Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.234 [INFO][4792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.240 [INFO][4792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" host="ip-172-31-25-153" Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.246 [INFO][4792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.129/26] block=192.168.61.128/26 
handle="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" host="ip-172-31-25-153" Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.246 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.129/26] handle="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" host="ip-172-31-25-153" Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.246 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:47:09.287289 containerd[1990]: 2025-12-12 18:47:09.246 [INFO][4792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.129/26] IPv6=[] ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" HandleID="k8s-pod-network.6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Workload="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.287556 containerd[1990]: 2025-12-12 18:47:09.251 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0", GenerateName:"whisker-6448458988-", Namespace:"calico-system", SelfLink:"", UID:"20a2fd47-4a22-4521-b92b-0d8c954400d5", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6448458988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"whisker-6448458988-2gsdl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25649847639", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:09.287556 containerd[1990]: 2025-12-12 18:47:09.251 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.129/32] ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.287656 containerd[1990]: 2025-12-12 18:47:09.251 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25649847639 ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.287656 containerd[1990]: 2025-12-12 18:47:09.260 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.287715 containerd[1990]: 2025-12-12 18:47:09.260 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0", GenerateName:"whisker-6448458988-", Namespace:"calico-system", SelfLink:"", UID:"20a2fd47-4a22-4521-b92b-0d8c954400d5", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6448458988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b", Pod:"whisker-6448458988-2gsdl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25649847639", MAC:"e2:af:60:8c:87:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:09.287769 containerd[1990]: 2025-12-12 18:47:09.282 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" Namespace="calico-system" Pod="whisker-6448458988-2gsdl" 
WorkloadEndpoint="ip--172--31--25--153-k8s-whisker--6448458988--2gsdl-eth0" Dec 12 18:47:09.780249 systemd-networkd[1833]: cali6208eedc9c2: Link UP Dec 12 18:47:09.784827 systemd-networkd[1833]: cali6208eedc9c2: Gained carrier Dec 12 18:47:09.820544 containerd[1990]: 2025-12-12 18:47:09.463 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0 coredns-674b8bbfcf- kube-system 82756a7a-e0ea-4024-9ee9-49158171866e 821 0 2025-12-12 18:46:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-153 coredns-674b8bbfcf-tmgnz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6208eedc9c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-" Dec 12 18:47:09.820544 containerd[1990]: 2025-12-12 18:47:09.463 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.820544 containerd[1990]: 2025-12-12 18:47:09.667 [INFO][4944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" HandleID="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Workload="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.667 [INFO][4944] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" HandleID="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Workload="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000255750), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-153", "pod":"coredns-674b8bbfcf-tmgnz", "timestamp":"2025-12-12 18:47:09.667106799 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.667 [INFO][4944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.667 [INFO][4944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.667 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.692 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" host="ip-172-31-25-153" Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.720 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.730 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.733 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:09.822905 containerd[1990]: 2025-12-12 18:47:09.737 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.737 [INFO][4944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" host="ip-172-31-25-153" Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.739 [INFO][4944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574 Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.747 [INFO][4944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" host="ip-172-31-25-153" Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.757 [INFO][4944] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.130/26] block=192.168.61.128/26 
handle="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" host="ip-172-31-25-153" Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.757 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.130/26] handle="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" host="ip-172-31-25-153" Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.757 [INFO][4944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:47:09.823356 containerd[1990]: 2025-12-12 18:47:09.757 [INFO][4944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.130/26] IPv6=[] ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" HandleID="k8s-pod-network.b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Workload="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.823632 containerd[1990]: 2025-12-12 18:47:09.774 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"82756a7a-e0ea-4024-9ee9-49158171866e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"coredns-674b8bbfcf-tmgnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6208eedc9c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:09.823632 containerd[1990]: 2025-12-12 18:47:09.775 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.130/32] ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.823632 containerd[1990]: 2025-12-12 18:47:09.775 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6208eedc9c2 ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.823632 containerd[1990]: 2025-12-12 18:47:09.781 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.823632 containerd[1990]: 2025-12-12 18:47:09.784 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"82756a7a-e0ea-4024-9ee9-49158171866e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574", Pod:"coredns-674b8bbfcf-tmgnz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6208eedc9c2", MAC:"de:79:78:9a:8d:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:09.823632 containerd[1990]: 2025-12-12 18:47:09.804 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" Namespace="kube-system" Pod="coredns-674b8bbfcf-tmgnz" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--tmgnz-eth0" Dec 12 18:47:09.872010 containerd[1990]: time="2025-12-12T18:47:09.871198057Z" level=info msg="connecting to shim 6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b" address="unix:///run/containerd/s/243ba5c77e1320b8d366903c74d9c2762ef22112fed22e40e85507cd7c095f00" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:09.915015 containerd[1990]: time="2025-12-12T18:47:09.914328450Z" level=info msg="connecting to shim b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574" address="unix:///run/containerd/s/6599767b26ef60be4b1a6b39e65c244b9b9311c74868ea0b502a96ce246a7d88" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:09.976758 systemd-networkd[1833]: calieb6cb3c02fa: Link UP Dec 12 18:47:09.979586 systemd-networkd[1833]: calieb6cb3c02fa: Gained carrier Dec 12 18:47:09.990374 systemd[1]: Started cri-containerd-6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b.scope - libcontainer container 6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b. Dec 12 18:47:10.031614 systemd[1]: Started cri-containerd-b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574.scope - libcontainer container b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574. 
Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.540 [INFO][4909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0 calico-apiserver-6f58b74bcb- calico-apiserver 9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d 824 0 2025-12-12 18:46:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f58b74bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-153 calico-apiserver-6f58b74bcb-s6q4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb6cb3c02fa [] [] }} ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.543 [INFO][4909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.701 [INFO][4962] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" HandleID="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Workload="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.701 [INFO][4962] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" 
HandleID="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Workload="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-153", "pod":"calico-apiserver-6f58b74bcb-s6q4x", "timestamp":"2025-12-12 18:47:09.701231025 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.701 [INFO][4962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.757 [INFO][4962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.758 [INFO][4962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.787 [INFO][4962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.821 [INFO][4962] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.849 [INFO][4962] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.856 [INFO][4962] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.868 [INFO][4962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 
host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.868 [INFO][4962] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.874 [INFO][4962] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6 Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.907 [INFO][4962] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.929 [INFO][4962] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.131/26] block=192.168.61.128/26 handle="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.933 [INFO][4962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.131/26] handle="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" host="ip-172-31-25-153" Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.933 [INFO][4962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:47:10.049283 containerd[1990]: 2025-12-12 18:47:09.933 [INFO][4962] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.131/26] IPv6=[] ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" HandleID="k8s-pod-network.20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Workload="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.054082 containerd[1990]: 2025-12-12 18:47:09.950 [INFO][4909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0", GenerateName:"calico-apiserver-6f58b74bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f58b74bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"calico-apiserver-6f58b74bcb-s6q4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb6cb3c02fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:10.054082 containerd[1990]: 2025-12-12 18:47:09.950 [INFO][4909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.131/32] ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.054082 containerd[1990]: 2025-12-12 18:47:09.950 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb6cb3c02fa ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.054082 containerd[1990]: 2025-12-12 18:47:09.994 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.054082 containerd[1990]: 2025-12-12 18:47:10.010 [INFO][4909] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0", GenerateName:"calico-apiserver-6f58b74bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f58b74bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6", Pod:"calico-apiserver-6f58b74bcb-s6q4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb6cb3c02fa", MAC:"02:68:54:ac:68:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:10.054082 containerd[1990]: 2025-12-12 18:47:10.038 [INFO][4909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-s6q4x" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--s6q4x-eth0" Dec 12 18:47:10.146673 containerd[1990]: time="2025-12-12T18:47:10.146597473Z" level=info msg="connecting to shim 
20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6" address="unix:///run/containerd/s/af46d4c5598452bf1f01a56aba747071a3ba3c5768605e83d5dbc6f477ac24e6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:10.198552 systemd-networkd[1833]: cali51e97aa6938: Link UP Dec 12 18:47:10.200722 systemd-networkd[1833]: cali51e97aa6938: Gained carrier Dec 12 18:47:10.230397 containerd[1990]: time="2025-12-12T18:47:10.226433906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjdx8,Uid:627e8918-ce59-4b1e-a58e-99fb7e0005f5,Namespace:calico-system,Attempt:0,}" Dec 12 18:47:10.241476 containerd[1990]: time="2025-12-12T18:47:10.241427935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-ql2z4,Uid:ba2f2d53-b502-4a41-a1a8-fae69661a05c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:47:10.263979 containerd[1990]: time="2025-12-12T18:47:10.263924373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gc8gj,Uid:1f4fc72d-306f-401c-8038-da87f142a57b,Namespace:kube-system,Attempt:0,}" Dec 12 18:47:10.286537 systemd[1]: Started cri-containerd-20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6.scope - libcontainer container 20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6. 
Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.521 [INFO][4910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0 csi-node-driver- calico-system c534bc62-f909-4723-a1ce-dd8a325ef04d 709 0 2025-12-12 18:46:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-153 csi-node-driver-8rtgc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali51e97aa6938 [] [] }} ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.521 [INFO][4910] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.709 [INFO][4955] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" HandleID="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Workload="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.709 [INFO][4955] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" 
HandleID="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Workload="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324570), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-153", "pod":"csi-node-driver-8rtgc", "timestamp":"2025-12-12 18:47:09.709513739 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.709 [INFO][4955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.934 [INFO][4955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.934 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.956 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:09.982 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.030 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.058 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.070 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 
18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.070 [INFO][4955] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.090 [INFO][4955] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926 Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.121 [INFO][4955] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.159 [INFO][4955] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.132/26] block=192.168.61.128/26 handle="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.159 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.132/26] handle="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" host="ip-172-31-25-153" Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.159 [INFO][4955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:47:10.324579 containerd[1990]: 2025-12-12 18:47:10.159 [INFO][4955] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.132/26] IPv6=[] ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" HandleID="k8s-pod-network.5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Workload="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.328871 containerd[1990]: 2025-12-12 18:47:10.184 [INFO][4910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c534bc62-f909-4723-a1ce-dd8a325ef04d", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"csi-node-driver-8rtgc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51e97aa6938", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:10.328871 containerd[1990]: 2025-12-12 18:47:10.186 [INFO][4910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.132/32] ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.328871 containerd[1990]: 2025-12-12 18:47:10.186 [INFO][4910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51e97aa6938 ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.328871 containerd[1990]: 2025-12-12 18:47:10.243 [INFO][4910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.328871 containerd[1990]: 2025-12-12 18:47:10.253 [INFO][4910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"c534bc62-f909-4723-a1ce-dd8a325ef04d", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926", Pod:"csi-node-driver-8rtgc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51e97aa6938", MAC:"a6:e5:73:e0:5f:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:10.328871 containerd[1990]: 2025-12-12 18:47:10.298 [INFO][4910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" Namespace="calico-system" Pod="csi-node-driver-8rtgc" WorkloadEndpoint="ip--172--31--25--153-k8s-csi--node--driver--8rtgc-eth0" Dec 12 18:47:10.445418 systemd-networkd[1833]: cali78d8a7e22b9: Link UP Dec 12 18:47:10.454694 systemd-networkd[1833]: cali78d8a7e22b9: Gained carrier Dec 12 18:47:10.591229 containerd[1990]: time="2025-12-12T18:47:10.590028205Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-tmgnz,Uid:82756a7a-e0ea-4024-9ee9-49158171866e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574\"" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:09.537 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0 calico-kube-controllers-69dcd64969- calico-system e69104d4-3599-4ed4-87b8-edf0ec255633 819 0 2025-12-12 18:46:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69dcd64969 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-153 calico-kube-controllers-69dcd64969-ztnlv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali78d8a7e22b9 [] [] }} ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:09.537 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:09.717 [INFO][4958] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" HandleID="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" 
Workload="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:09.717 [INFO][4958] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" HandleID="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Workload="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035fa20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-153", "pod":"calico-kube-controllers-69dcd64969-ztnlv", "timestamp":"2025-12-12 18:47:09.717171727 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:09.717 [INFO][4958] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.160 [INFO][4958] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.160 [INFO][4958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.204 [INFO][4958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.250 [INFO][4958] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.318 [INFO][4958] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.335 [INFO][4958] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.346 [INFO][4958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.346 [INFO][4958] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.356 [INFO][4958] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3 Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.377 [INFO][4958] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.400 [INFO][4958] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.133/26] block=192.168.61.128/26 
handle="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.400 [INFO][4958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.133/26] handle="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" host="ip-172-31-25-153" Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.401 [INFO][4958] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:47:10.594279 containerd[1990]: 2025-12-12 18:47:10.402 [INFO][4958] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.133/26] IPv6=[] ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" HandleID="k8s-pod-network.bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Workload="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.597210 containerd[1990]: 2025-12-12 18:47:10.425 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0", GenerateName:"calico-kube-controllers-69dcd64969-", Namespace:"calico-system", SelfLink:"", UID:"e69104d4-3599-4ed4-87b8-edf0ec255633", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dcd64969", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"calico-kube-controllers-69dcd64969-ztnlv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78d8a7e22b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:10.597210 containerd[1990]: 2025-12-12 18:47:10.426 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.133/32] ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.597210 containerd[1990]: 2025-12-12 18:47:10.426 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78d8a7e22b9 ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.597210 containerd[1990]: 2025-12-12 18:47:10.478 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" 
WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.597210 containerd[1990]: 2025-12-12 18:47:10.491 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0", GenerateName:"calico-kube-controllers-69dcd64969-", Namespace:"calico-system", SelfLink:"", UID:"e69104d4-3599-4ed4-87b8-edf0ec255633", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69dcd64969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3", Pod:"calico-kube-controllers-69dcd64969-ztnlv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78d8a7e22b9", 
MAC:"96:a2:1f:4f:18:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:10.597210 containerd[1990]: 2025-12-12 18:47:10.540 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" Namespace="calico-system" Pod="calico-kube-controllers-69dcd64969-ztnlv" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--kube--controllers--69dcd64969--ztnlv-eth0" Dec 12 18:47:10.668021 containerd[1990]: time="2025-12-12T18:47:10.667396548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6448458988-2gsdl,Uid:20a2fd47-4a22-4521-b92b-0d8c954400d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f8642d7c24895aaad192a2491ba3a8664e91bf668ac5ed5d8081647bceb1f5b\"" Dec 12 18:47:10.728421 containerd[1990]: time="2025-12-12T18:47:10.728365014Z" level=info msg="connecting to shim 5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926" address="unix:///run/containerd/s/755f17528927e22ac837720a700d33821015e3c47c08db1a8052bd7883f5627c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:10.747799 containerd[1990]: time="2025-12-12T18:47:10.747743636Z" level=info msg="CreateContainer within sandbox \"b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:47:10.780525 containerd[1990]: time="2025-12-12T18:47:10.780328075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:47:10.884610 containerd[1990]: time="2025-12-12T18:47:10.884252335Z" level=info msg="connecting to shim bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3" address="unix:///run/containerd/s/468c2e3b64d8915e55a03569b93ebf7b70aa351f92e376826202d02a82d78339" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:10.921555 systemd[1]: Started 
cri-containerd-5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926.scope - libcontainer container 5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926. Dec 12 18:47:10.950967 containerd[1990]: time="2025-12-12T18:47:10.950851500Z" level=info msg="Container 4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:47:11.002789 containerd[1990]: time="2025-12-12T18:47:11.002744743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-s6q4x,Uid:9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"20dea9f908f5f43ebac9857b09c89633e6b4552116f444417b844443cb722ed6\"" Dec 12 18:47:11.008056 containerd[1990]: time="2025-12-12T18:47:11.007982473Z" level=info msg="CreateContainer within sandbox \"b80f922029fbf6711103f4724efca776d109c724e73a4ba20b0d3325f14a6574\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196\"" Dec 12 18:47:11.010805 containerd[1990]: time="2025-12-12T18:47:11.010001877Z" level=info msg="StartContainer for \"4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196\"" Dec 12 18:47:11.012857 containerd[1990]: time="2025-12-12T18:47:11.011991242Z" level=info msg="connecting to shim 4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196" address="unix:///run/containerd/s/6599767b26ef60be4b1a6b39e65c244b9b9311c74868ea0b502a96ce246a7d88" protocol=ttrpc version=3 Dec 12 18:47:11.053227 containerd[1990]: time="2025-12-12T18:47:11.053184146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8rtgc,Uid:c534bc62-f909-4723-a1ce-dd8a325ef04d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a4fc7324e4f1aeac3863b57cc86ca926b1967acdfb121c42e8d6db14d1f1926\"" Dec 12 18:47:11.063494 systemd-networkd[1833]: cali9f555624c77: Link UP Dec 12 18:47:11.065502 
systemd-networkd[1833]: cali9f555624c77: Gained carrier Dec 12 18:47:11.075099 systemd-networkd[1833]: calieb6cb3c02fa: Gained IPv6LL Dec 12 18:47:11.083745 systemd[1]: Started cri-containerd-bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3.scope - libcontainer container bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3. Dec 12 18:47:11.127459 systemd[1]: Started cri-containerd-4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196.scope - libcontainer container 4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196. Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.573 [INFO][5109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0 goldmane-666569f655- calico-system 627e8918-ce59-4b1e-a58e-99fb7e0005f5 823 0 2025-12-12 18:46:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-25-153 goldmane-666569f655-sjdx8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9f555624c77 [] [] }} ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.581 [INFO][5109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.828 [INFO][5167] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" HandleID="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Workload="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.830 [INFO][5167] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" HandleID="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Workload="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006024d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-153", "pod":"goldmane-666569f655-sjdx8", "timestamp":"2025-12-12 18:47:10.82818897 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.830 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.830 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.830 [INFO][5167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.880 [INFO][5167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.919 [INFO][5167] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.934 [INFO][5167] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.943 [INFO][5167] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.950 [INFO][5167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.950 [INFO][5167] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.955 [INFO][5167] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590 Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.971 [INFO][5167] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.995 [INFO][5167] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.134/26] block=192.168.61.128/26 
handle="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.995 [INFO][5167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.134/26] handle="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" host="ip-172-31-25-153" Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.995 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:47:11.130876 containerd[1990]: 2025-12-12 18:47:10.995 [INFO][5167] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.134/26] IPv6=[] ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" HandleID="k8s-pod-network.7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Workload="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.134330 containerd[1990]: 2025-12-12 18:47:11.023 [INFO][5109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"627e8918-ce59-4b1e-a58e-99fb7e0005f5", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"goldmane-666569f655-sjdx8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f555624c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:11.134330 containerd[1990]: 2025-12-12 18:47:11.023 [INFO][5109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.134/32] ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.134330 containerd[1990]: 2025-12-12 18:47:11.023 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f555624c77 ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.134330 containerd[1990]: 2025-12-12 18:47:11.078 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.134330 containerd[1990]: 2025-12-12 18:47:11.089 [INFO][5109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"627e8918-ce59-4b1e-a58e-99fb7e0005f5", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590", Pod:"goldmane-666569f655-sjdx8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f555624c77", MAC:"aa:a2:4b:72:f9:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:11.134330 containerd[1990]: 2025-12-12 18:47:11.121 [INFO][5109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" Namespace="calico-system" Pod="goldmane-666569f655-sjdx8" 
WorkloadEndpoint="ip--172--31--25--153-k8s-goldmane--666569f655--sjdx8-eth0" Dec 12 18:47:11.183592 systemd-networkd[1833]: cali777530f0212: Link UP Dec 12 18:47:11.183858 systemd-networkd[1833]: cali777530f0212: Gained carrier Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.662 [INFO][5106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0 calico-apiserver-6f58b74bcb- calico-apiserver ba2f2d53-b502-4a41-a1a8-fae69661a05c 822 0 2025-12-12 18:46:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f58b74bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-153 calico-apiserver-6f58b74bcb-ql2z4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali777530f0212 [] [] }} ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.665 [INFO][5106] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.931 [INFO][5182] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" HandleID="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" 
Workload="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.932 [INFO][5182] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" HandleID="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Workload="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-153", "pod":"calico-apiserver-6f58b74bcb-ql2z4", "timestamp":"2025-12-12 18:47:10.931666091 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.933 [INFO][5182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.999 [INFO][5182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:10.999 [INFO][5182] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.034 [INFO][5182] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.092 [INFO][5182] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.117 [INFO][5182] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.123 [INFO][5182] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.134 [INFO][5182] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.134 [INFO][5182] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.141 [INFO][5182] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.150 [INFO][5182] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.163 [INFO][5182] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.135/26] block=192.168.61.128/26 
handle="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.164 [INFO][5182] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.135/26] handle="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" host="ip-172-31-25-153" Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.164 [INFO][5182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:47:11.224972 containerd[1990]: 2025-12-12 18:47:11.164 [INFO][5182] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.135/26] IPv6=[] ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" HandleID="k8s-pod-network.8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Workload="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 18:47:11.227154 containerd[1990]: 2025-12-12 18:47:11.174 [INFO][5106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0", GenerateName:"calico-apiserver-6f58b74bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba2f2d53-b502-4a41-a1a8-fae69661a05c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f58b74bcb", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"calico-apiserver-6f58b74bcb-ql2z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777530f0212", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:11.227154 containerd[1990]: 2025-12-12 18:47:11.174 [INFO][5106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.135/32] ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 18:47:11.227154 containerd[1990]: 2025-12-12 18:47:11.175 [INFO][5106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali777530f0212 ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 18:47:11.227154 containerd[1990]: 2025-12-12 18:47:11.181 [INFO][5106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 
18:47:11.227154 containerd[1990]: 2025-12-12 18:47:11.184 [INFO][5106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0", GenerateName:"calico-apiserver-6f58b74bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba2f2d53-b502-4a41-a1a8-fae69661a05c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f58b74bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb", Pod:"calico-apiserver-6f58b74bcb-ql2z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777530f0212", MAC:"5e:6e:d3:fa:2e:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 
18:47:11.227154 containerd[1990]: 2025-12-12 18:47:11.216 [INFO][5106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" Namespace="calico-apiserver" Pod="calico-apiserver-6f58b74bcb-ql2z4" WorkloadEndpoint="ip--172--31--25--153-k8s-calico--apiserver--6f58b74bcb--ql2z4-eth0" Dec 12 18:47:11.292506 containerd[1990]: time="2025-12-12T18:47:11.292433260Z" level=info msg="StartContainer for \"4af72971deeee05e4647f1d4a0cfcb3c1dedfa7a3e1dc25f662a52522aa6a196\" returns successfully" Dec 12 18:47:11.308588 systemd-networkd[1833]: cali4bdbea801b2: Link UP Dec 12 18:47:11.309959 systemd-networkd[1833]: cali4bdbea801b2: Gained carrier Dec 12 18:47:11.329378 systemd-networkd[1833]: cali25649847639: Gained IPv6LL Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:10.680 [INFO][5129] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0 coredns-674b8bbfcf- kube-system 1f4fc72d-306f-401c-8038-da87f142a57b 812 0 2025-12-12 18:46:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-153 coredns-674b8bbfcf-gc8gj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4bdbea801b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:10.685 [INFO][5129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" 
WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.125 [INFO][5190] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" HandleID="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Workload="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.125 [INFO][5190] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" HandleID="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Workload="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102720), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-153", "pod":"coredns-674b8bbfcf-gc8gj", "timestamp":"2025-12-12 18:47:11.125148074 +0000 UTC"}, Hostname:"ip-172-31-25-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.128 [INFO][5190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.166 [INFO][5190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.166 [INFO][5190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-153' Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.197 [INFO][5190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.220 [INFO][5190] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.231 [INFO][5190] ipam/ipam.go 511: Trying affinity for 192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.242 [INFO][5190] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.249 [INFO][5190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.128/26 host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.249 [INFO][5190] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.128/26 handle="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.260 [INFO][5190] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.272 [INFO][5190] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.128/26 handle="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.286 [INFO][5190] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.136/26] block=192.168.61.128/26 
handle="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.286 [INFO][5190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.136/26] handle="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" host="ip-172-31-25-153" Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.287 [INFO][5190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:47:11.336265 containerd[1990]: 2025-12-12 18:47:11.287 [INFO][5190] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.136/26] IPv6=[] ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" HandleID="k8s-pod-network.17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Workload="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.338367 containerd[1990]: 2025-12-12 18:47:11.297 [INFO][5129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f4fc72d-306f-401c-8038-da87f142a57b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"", Pod:"coredns-674b8bbfcf-gc8gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bdbea801b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:11.338367 containerd[1990]: 2025-12-12 18:47:11.298 [INFO][5129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.136/32] ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.338367 containerd[1990]: 2025-12-12 18:47:11.298 [INFO][5129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bdbea801b2 ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.338367 containerd[1990]: 2025-12-12 18:47:11.310 [INFO][5129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.338367 containerd[1990]: 2025-12-12 18:47:11.312 [INFO][5129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f4fc72d-306f-401c-8038-da87f142a57b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 46, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-153", ContainerID:"17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e", Pod:"coredns-674b8bbfcf-gc8gj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bdbea801b2", MAC:"0a:cd:3c:c6:3e:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:47:11.338367 containerd[1990]: 2025-12-12 18:47:11.331 [INFO][5129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" Namespace="kube-system" Pod="coredns-674b8bbfcf-gc8gj" WorkloadEndpoint="ip--172--31--25--153-k8s-coredns--674b8bbfcf--gc8gj-eth0" Dec 12 18:47:11.357624 containerd[1990]: time="2025-12-12T18:47:11.357578110Z" level=info msg="connecting to shim 8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb" address="unix:///run/containerd/s/bf9f654cef3c113c9e270784548534112f1f0227eff7d2d519205f1a3968c157" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:11.367540 containerd[1990]: time="2025-12-12T18:47:11.367481435Z" level=info msg="connecting to shim 7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590" address="unix:///run/containerd/s/129997fcb22184304cbd574245aa2fc0cc86f716de1e44974e34597b31d906e9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:11.370607 containerd[1990]: time="2025-12-12T18:47:11.370573598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:11.378367 containerd[1990]: time="2025-12-12T18:47:11.378310167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:47:11.379173 containerd[1990]: time="2025-12-12T18:47:11.378453745Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:47:11.392128 kubelet[3543]: E1212 18:47:11.391728 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:47:11.393273 kubelet[3543]: E1212 18:47:11.392962 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:47:11.393747 containerd[1990]: time="2025-12-12T18:47:11.393435799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:47:11.432116 kubelet[3543]: E1212 18:47:11.431936 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:91ec2d7bb4f647fe886f9383a115c758,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:11.457478 containerd[1990]: time="2025-12-12T18:47:11.455663053Z" level=info msg="connecting to shim 
17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e" address="unix:///run/containerd/s/a6ea8c472b2d14094043d0a56320e2bcecfe236d9da81aa7ac6dec9cff494c60" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:47:11.480245 systemd[1]: Started cri-containerd-7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590.scope - libcontainer container 7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590. Dec 12 18:47:11.512982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943777338.mount: Deactivated successfully. Dec 12 18:47:11.526533 containerd[1990]: time="2025-12-12T18:47:11.526215223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69dcd64969-ztnlv,Uid:e69104d4-3599-4ed4-87b8-edf0ec255633,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb70136bdff02c6570590d6210b29456a178f219c7d1cb935ee2036ae58231a3\"" Dec 12 18:47:11.533294 systemd[1]: Started cri-containerd-8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb.scope - libcontainer container 8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb. Dec 12 18:47:11.576441 systemd[1]: Started cri-containerd-17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e.scope - libcontainer container 17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e. 
Dec 12 18:47:11.586608 systemd-networkd[1833]: cali51e97aa6938: Gained IPv6LL Dec 12 18:47:11.659593 containerd[1990]: time="2025-12-12T18:47:11.659348479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gc8gj,Uid:1f4fc72d-306f-401c-8038-da87f142a57b,Namespace:kube-system,Attempt:0,} returns sandbox id \"17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e\"" Dec 12 18:47:11.669289 containerd[1990]: time="2025-12-12T18:47:11.669146497Z" level=info msg="CreateContainer within sandbox \"17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:47:11.700952 containerd[1990]: time="2025-12-12T18:47:11.700531239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:11.709181 containerd[1990]: time="2025-12-12T18:47:11.705815770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:47:11.709181 containerd[1990]: time="2025-12-12T18:47:11.705919420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:11.709948 kubelet[3543]: E1212 18:47:11.706107 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:11.709948 kubelet[3543]: E1212 18:47:11.706162 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:11.710135 containerd[1990]: time="2025-12-12T18:47:11.709503244Z" level=info msg="Container 42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:47:11.710192 kubelet[3543]: E1212 18:47:11.706421 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-s6q4x_calico-apiserver(9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:11.712444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863064805.mount: Deactivated successfully. 
Dec 12 18:47:11.715373 containerd[1990]: time="2025-12-12T18:47:11.713200260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:47:11.720820 kubelet[3543]: E1212 18:47:11.720763 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:47:11.741232 containerd[1990]: time="2025-12-12T18:47:11.741174390Z" level=info msg="CreateContainer within sandbox \"17b63206b3e8c56d994c757c913b14cfe1f2807d01cdfbfa55e8ce9e4b0f799e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b\"" Dec 12 18:47:11.742682 containerd[1990]: time="2025-12-12T18:47:11.742611307Z" level=info msg="StartContainer for \"42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b\"" Dec 12 18:47:11.744856 containerd[1990]: time="2025-12-12T18:47:11.744807735Z" level=info msg="connecting to shim 42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b" address="unix:///run/containerd/s/a6ea8c472b2d14094043d0a56320e2bcecfe236d9da81aa7ac6dec9cff494c60" protocol=ttrpc version=3 Dec 12 18:47:11.749875 containerd[1990]: time="2025-12-12T18:47:11.749825176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f58b74bcb-ql2z4,Uid:ba2f2d53-b502-4a41-a1a8-fae69661a05c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8af445192691ce90a004d46e2ec3e58db1a72dfe0f2a09f882a189402dc27bbb\"" Dec 12 18:47:11.756503 containerd[1990]: time="2025-12-12T18:47:11.756367067Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-sjdx8,Uid:627e8918-ce59-4b1e-a58e-99fb7e0005f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7138f6cc811ee86c96c1505a20020fcf169f46743a4449cd4529e311dd09f590\"" Dec 12 18:47:11.777697 systemd-networkd[1833]: cali6208eedc9c2: Gained IPv6LL Dec 12 18:47:11.787270 systemd[1]: Started cri-containerd-42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b.scope - libcontainer container 42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b. Dec 12 18:47:11.813050 kubelet[3543]: E1212 18:47:11.812869 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:47:11.876477 kubelet[3543]: I1212 18:47:11.876311 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tmgnz" podStartSLOduration=48.876286211 podStartE2EDuration="48.876286211s" podCreationTimestamp="2025-12-12 18:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:47:11.875445887 +0000 UTC m=+53.847155161" watchObservedRunningTime="2025-12-12 18:47:11.876286211 +0000 UTC m=+53.847995485" Dec 12 18:47:11.877242 containerd[1990]: time="2025-12-12T18:47:11.877212573Z" level=info msg="StartContainer for \"42d71763abb518e28d0f85a98a6288c41c8925bd958c31cd5f91cae19505322b\" returns successfully" Dec 12 18:47:11.905498 systemd-networkd[1833]: cali78d8a7e22b9: Gained IPv6LL 
Dec 12 18:47:12.011209 containerd[1990]: time="2025-12-12T18:47:12.011162986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:12.013560 containerd[1990]: time="2025-12-12T18:47:12.013476249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:47:12.013691 containerd[1990]: time="2025-12-12T18:47:12.013570261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:47:12.013781 kubelet[3543]: E1212 18:47:12.013742 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:47:12.013866 kubelet[3543]: E1212 18:47:12.013788 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:47:12.014063 kubelet[3543]: E1212 18:47:12.013977 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:12.014623 containerd[1990]: time="2025-12-12T18:47:12.014587770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:47:12.161322 systemd-networkd[1833]: cali9f555624c77: Gained IPv6LL Dec 12 18:47:12.320531 containerd[1990]: time="2025-12-12T18:47:12.320409093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:12.322558 containerd[1990]: time="2025-12-12T18:47:12.322503422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:47:12.322708 containerd[1990]: time="2025-12-12T18:47:12.322518246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:47:12.322799 kubelet[3543]: E1212 18:47:12.322754 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:47:12.322847 kubelet[3543]: E1212 18:47:12.322808 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:47:12.323225 containerd[1990]: 
time="2025-12-12T18:47:12.323121300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:47:12.323748 kubelet[3543]: E1212 18:47:12.323287 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:
[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:12.325013 kubelet[3543]: E1212 18:47:12.324963 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:47:12.617134 containerd[1990]: time="2025-12-12T18:47:12.616904649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:12.619324 containerd[1990]: time="2025-12-12T18:47:12.619231936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:47:12.620091 containerd[1990]: time="2025-12-12T18:47:12.619287373Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:47:12.620232 kubelet[3543]: E1212 18:47:12.619791 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:47:12.620232 kubelet[3543]: E1212 18:47:12.619844 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:47:12.621278 kubelet[3543]: E1212 18:47:12.620882 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qj2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69dcd64969-ztnlv_calico-system(e69104d4-3599-4ed4-87b8-edf0ec255633): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:12.621451 containerd[1990]: time="2025-12-12T18:47:12.621022336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:47:12.622991 kubelet[3543]: E1212 18:47:12.622947 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:47:12.802203 
systemd-networkd[1833]: cali777530f0212: Gained IPv6LL Dec 12 18:47:12.827720 kubelet[3543]: E1212 18:47:12.827667 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:47:12.828898 kubelet[3543]: E1212 18:47:12.828808 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:47:12.828898 kubelet[3543]: E1212 18:47:12.828858 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:47:12.906634 containerd[1990]: time="2025-12-12T18:47:12.906503781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:12.909308 containerd[1990]: time="2025-12-12T18:47:12.909176582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:47:12.909308 containerd[1990]: time="2025-12-12T18:47:12.909221460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:12.909653 kubelet[3543]: E1212 18:47:12.909580 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:12.909748 kubelet[3543]: E1212 18:47:12.909734 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 
18:47:12.910223 containerd[1990]: time="2025-12-12T18:47:12.910183639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:47:12.910884 kubelet[3543]: E1212 18:47:12.910752 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-ql2z4_calico-apiserver(ba2f2d53-b502-4a41-a1a8-fae69661a05c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:12.912609 kubelet[3543]: E1212 18:47:12.912145 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:47:12.919532 kubelet[3543]: I1212 18:47:12.919059 3543 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gc8gj" podStartSLOduration=49.919044207 
podStartE2EDuration="49.919044207s" podCreationTimestamp="2025-12-12 18:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:47:12.918939913 +0000 UTC m=+54.890649188" watchObservedRunningTime="2025-12-12 18:47:12.919044207 +0000 UTC m=+54.890753472" Dec 12 18:47:13.168394 containerd[1990]: time="2025-12-12T18:47:13.168265742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:13.170861 containerd[1990]: time="2025-12-12T18:47:13.170808663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:47:13.171001 containerd[1990]: time="2025-12-12T18:47:13.170826038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:13.171304 kubelet[3543]: E1212 18:47:13.171221 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:47:13.171304 kubelet[3543]: E1212 18:47:13.171278 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:47:13.171854 kubelet[3543]: E1212 18:47:13.171523 3543 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsgsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjdx8_calico-system(627e8918-ce59-4b1e-a58e-99fb7e0005f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:13.171979 containerd[1990]: time="2025-12-12T18:47:13.171597380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:47:13.173282 kubelet[3543]: E1212 18:47:13.173216 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:47:13.249295 systemd-networkd[1833]: cali4bdbea801b2: Gained IPv6LL Dec 12 18:47:13.464169 
containerd[1990]: time="2025-12-12T18:47:13.463891344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:13.466388 containerd[1990]: time="2025-12-12T18:47:13.466225457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:47:13.466388 containerd[1990]: time="2025-12-12T18:47:13.466252422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:47:13.466871 kubelet[3543]: E1212 18:47:13.466818 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:47:13.466936 kubelet[3543]: E1212 18:47:13.466876 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:47:13.467150 kubelet[3543]: E1212 18:47:13.467001 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:13.468236 kubelet[3543]: E1212 18:47:13.468189 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:47:13.828535 kubelet[3543]: E1212 18:47:13.828378 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:47:13.830079 kubelet[3543]: E1212 18:47:13.829850 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:47:13.830948 kubelet[3543]: E1212 18:47:13.830859 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:47:13.870194 systemd[1]: Started sshd@10-172.31.25.153:22-139.178.89.65:48816.service - OpenSSH per-connection server daemon (139.178.89.65:48816). Dec 12 18:47:14.091520 sshd[5534]: Accepted publickey for core from 139.178.89.65 port 48816 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:14.095505 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:14.102792 systemd-logind[1966]: New session 11 of user core. Dec 12 18:47:14.107439 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 18:47:14.480342 sshd[5539]: Connection closed by 139.178.89.65 port 48816 Dec 12 18:47:14.481411 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:14.485634 systemd[1]: sshd@10-172.31.25.153:22-139.178.89.65:48816.service: Deactivated successfully. Dec 12 18:47:14.487685 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:47:14.489120 systemd-logind[1966]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:47:14.491240 systemd-logind[1966]: Removed session 11. Dec 12 18:47:15.862049 ntpd[2222]: Listen normally on 6 vxlan.calico 192.168.61.128:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 6 vxlan.calico 192.168.61.128:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 7 vxlan.calico [fe80::6464:2cff:fe8e:c419%4]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 8 cali25649847639 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 9 cali6208eedc9c2 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 10 calieb6cb3c02fa [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 11 cali51e97aa6938 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 12 cali78d8a7e22b9 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 13 cali9f555624c77 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 14 cali777530f0212 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 18:47:15.862803 ntpd[2222]: 12 Dec 18:47:15 ntpd[2222]: Listen normally on 15 cali4bdbea801b2 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 18:47:15.862119 ntpd[2222]: Listen 
normally on 7 vxlan.calico [fe80::6464:2cff:fe8e:c419%4]:123 Dec 12 18:47:15.862146 ntpd[2222]: Listen normally on 8 cali25649847639 [fe80::ecee:eeff:feee:eeee%7]:123 Dec 12 18:47:15.862166 ntpd[2222]: Listen normally on 9 cali6208eedc9c2 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 12 18:47:15.862192 ntpd[2222]: Listen normally on 10 calieb6cb3c02fa [fe80::ecee:eeff:feee:eeee%9]:123 Dec 12 18:47:15.862218 ntpd[2222]: Listen normally on 11 cali51e97aa6938 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 12 18:47:15.862238 ntpd[2222]: Listen normally on 12 cali78d8a7e22b9 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 12 18:47:15.862259 ntpd[2222]: Listen normally on 13 cali9f555624c77 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 12 18:47:15.862289 ntpd[2222]: Listen normally on 14 cali777530f0212 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 12 18:47:15.862308 ntpd[2222]: Listen normally on 15 cali4bdbea801b2 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 12 18:47:19.520280 systemd[1]: Started sshd@11-172.31.25.153:22-139.178.89.65:48818.service - OpenSSH per-connection server daemon (139.178.89.65:48818). Dec 12 18:47:19.739572 sshd[5571]: Accepted publickey for core from 139.178.89.65 port 48818 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:19.742628 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:19.748614 systemd-logind[1966]: New session 12 of user core. Dec 12 18:47:19.756309 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:47:20.043503 sshd[5574]: Connection closed by 139.178.89.65 port 48818 Dec 12 18:47:20.044279 sshd-session[5571]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:20.048551 systemd[1]: sshd@11-172.31.25.153:22-139.178.89.65:48818.service: Deactivated successfully. Dec 12 18:47:20.051481 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:47:20.055117 systemd-logind[1966]: Session 12 logged out. Waiting for processes to exit. 
Dec 12 18:47:20.056603 systemd-logind[1966]: Removed session 12. Dec 12 18:47:20.078478 systemd[1]: Started sshd@12-172.31.25.153:22-139.178.89.65:48820.service - OpenSSH per-connection server daemon (139.178.89.65:48820). Dec 12 18:47:20.261468 sshd[5586]: Accepted publickey for core from 139.178.89.65 port 48820 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:20.263010 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:20.269470 systemd-logind[1966]: New session 13 of user core. Dec 12 18:47:20.272199 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:47:20.589743 sshd[5589]: Connection closed by 139.178.89.65 port 48820 Dec 12 18:47:20.592788 sshd-session[5586]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:20.602559 systemd[1]: sshd@12-172.31.25.153:22-139.178.89.65:48820.service: Deactivated successfully. Dec 12 18:47:20.609676 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:47:20.614420 systemd-logind[1966]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:47:20.630611 systemd[1]: Started sshd@13-172.31.25.153:22-139.178.89.65:56712.service - OpenSSH per-connection server daemon (139.178.89.65:56712). Dec 12 18:47:20.631951 systemd-logind[1966]: Removed session 13. Dec 12 18:47:20.825782 sshd[5599]: Accepted publickey for core from 139.178.89.65 port 56712 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:20.827296 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:20.834148 systemd-logind[1966]: New session 14 of user core. Dec 12 18:47:20.840338 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 18:47:21.086833 sshd[5602]: Connection closed by 139.178.89.65 port 56712 Dec 12 18:47:21.088247 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:21.093615 systemd[1]: sshd@13-172.31.25.153:22-139.178.89.65:56712.service: Deactivated successfully. Dec 12 18:47:21.095985 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:47:21.097772 systemd-logind[1966]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:47:21.099989 systemd-logind[1966]: Removed session 14. Dec 12 18:47:25.208817 containerd[1990]: time="2025-12-12T18:47:25.208471395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:47:25.499261 containerd[1990]: time="2025-12-12T18:47:25.499120965Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:25.501471 containerd[1990]: time="2025-12-12T18:47:25.501356747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:47:25.501471 containerd[1990]: time="2025-12-12T18:47:25.501434336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:47:25.501631 kubelet[3543]: E1212 18:47:25.501584 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:47:25.502001 kubelet[3543]: E1212 18:47:25.501629 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:47:25.502001 kubelet[3543]: E1212 18:47:25.501805 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:91ec2d7bb4f647fe886f9383a115c758,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:25.502984 containerd[1990]: time="2025-12-12T18:47:25.502910148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:47:25.797703 containerd[1990]: time="2025-12-12T18:47:25.797654070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:25.800133 containerd[1990]: time="2025-12-12T18:47:25.800017502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:47:25.800367 containerd[1990]: time="2025-12-12T18:47:25.800087844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:25.800464 kubelet[3543]: E1212 18:47:25.800426 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:25.800529 kubelet[3543]: E1212 18:47:25.800492 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:25.801019 kubelet[3543]: E1212 18:47:25.800966 3543 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-s6q4x_calico-apiserver(9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:25.801812 containerd[1990]: time="2025-12-12T18:47:25.801784294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:47:25.802337 kubelet[3543]: E1212 18:47:25.802211 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:47:26.075427 containerd[1990]: 
time="2025-12-12T18:47:26.075175708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:26.077499 containerd[1990]: time="2025-12-12T18:47:26.077418764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:47:26.077838 containerd[1990]: time="2025-12-12T18:47:26.077486562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:47:26.078056 kubelet[3543]: E1212 18:47:26.077994 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:47:26.078109 kubelet[3543]: E1212 18:47:26.078070 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:47:26.078297 kubelet[3543]: E1212 18:47:26.078194 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:26.079902 kubelet[3543]: E1212 18:47:26.079829 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:47:26.121705 systemd[1]: Started sshd@14-172.31.25.153:22-139.178.89.65:56714.service - OpenSSH per-connection server daemon (139.178.89.65:56714). Dec 12 18:47:26.295028 sshd[5628]: Accepted publickey for core from 139.178.89.65 port 56714 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:26.296489 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:26.305000 systemd-logind[1966]: New session 15 of user core. Dec 12 18:47:26.312273 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:47:26.560106 sshd[5631]: Connection closed by 139.178.89.65 port 56714 Dec 12 18:47:26.560468 sshd-session[5628]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:26.566558 systemd[1]: sshd@14-172.31.25.153:22-139.178.89.65:56714.service: Deactivated successfully. 
Dec 12 18:47:26.569603 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:47:26.571158 systemd-logind[1966]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:47:26.572835 systemd-logind[1966]: Removed session 15. Dec 12 18:47:27.210189 containerd[1990]: time="2025-12-12T18:47:27.210090985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:47:27.484475 containerd[1990]: time="2025-12-12T18:47:27.484348396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:27.489127 containerd[1990]: time="2025-12-12T18:47:27.488998949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:47:27.489301 containerd[1990]: time="2025-12-12T18:47:27.489067101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:27.489437 kubelet[3543]: E1212 18:47:27.489369 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:47:27.489840 kubelet[3543]: E1212 18:47:27.489445 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:47:27.490436 containerd[1990]: 
time="2025-12-12T18:47:27.490184720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:47:27.490526 kubelet[3543]: E1212 18:47:27.489899 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsgsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjdx8_calico-system(627e8918-ce59-4b1e-a58e-99fb7e0005f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:27.495104 kubelet[3543]: E1212 18:47:27.495015 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 
18:47:27.763993 containerd[1990]: time="2025-12-12T18:47:27.763940510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:27.766172 containerd[1990]: time="2025-12-12T18:47:27.766115246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:47:27.766292 containerd[1990]: time="2025-12-12T18:47:27.766135809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:27.766439 kubelet[3543]: E1212 18:47:27.766398 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:27.766489 kubelet[3543]: E1212 18:47:27.766443 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:27.766758 kubelet[3543]: E1212 18:47:27.766704 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-ql2z4_calico-apiserver(ba2f2d53-b502-4a41-a1a8-fae69661a05c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:27.767436 containerd[1990]: time="2025-12-12T18:47:27.767386267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:47:27.768610 kubelet[3543]: E1212 18:47:27.768509 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:47:28.049621 containerd[1990]: time="2025-12-12T18:47:28.049477630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:28.051779 containerd[1990]: time="2025-12-12T18:47:28.051710893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:47:28.051947 containerd[1990]: time="2025-12-12T18:47:28.051799745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:47:28.051986 kubelet[3543]: E1212 18:47:28.051942 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:47:28.052079 kubelet[3543]: E1212 18:47:28.051998 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:47:28.052590 containerd[1990]: time="2025-12-12T18:47:28.052333193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:47:28.052794 kubelet[3543]: E1212 18:47:28.052353 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContex
t{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:28.316077 containerd[1990]: time="2025-12-12T18:47:28.315934315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:28.318143 containerd[1990]: time="2025-12-12T18:47:28.318089650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:47:28.318254 containerd[1990]: time="2025-12-12T18:47:28.318174977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:47:28.318461 kubelet[3543]: E1212 18:47:28.318395 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:47:28.318575 kubelet[3543]: E1212 18:47:28.318466 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:47:28.318780 kubelet[3543]: E1212 18:47:28.318737 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qj2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec
:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69dcd64969-ztnlv_calico-system(e69104d4-3599-4ed4-87b8-edf0ec255633): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:28.319557 containerd[1990]: time="2025-12-12T18:47:28.319516734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:47:28.320908 kubelet[3543]: E1212 18:47:28.320824 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:47:28.603485 containerd[1990]: time="2025-12-12T18:47:28.603304982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:28.605571 containerd[1990]: time="2025-12-12T18:47:28.605434798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:47:28.605571 containerd[1990]: time="2025-12-12T18:47:28.605474838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:47:28.605789 kubelet[3543]: E1212 18:47:28.605726 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:47:28.606346 kubelet[3543]: E1212 18:47:28.605798 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:47:28.606346 kubelet[3543]: E1212 18:47:28.605956 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:28.607623 kubelet[3543]: E1212 18:47:28.607569 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:47:31.594983 systemd[1]: Started sshd@15-172.31.25.153:22-139.178.89.65:39154.service - OpenSSH per-connection server daemon (139.178.89.65:39154). Dec 12 18:47:31.811977 sshd[5651]: Accepted publickey for core from 139.178.89.65 port 39154 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:31.847724 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:31.853645 systemd-logind[1966]: New session 16 of user core. Dec 12 18:47:31.858244 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 18:47:32.192386 sshd[5679]: Connection closed by 139.178.89.65 port 39154 Dec 12 18:47:32.193107 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:32.197240 systemd[1]: sshd@15-172.31.25.153:22-139.178.89.65:39154.service: Deactivated successfully. Dec 12 18:47:32.203618 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:47:32.205343 systemd-logind[1966]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:47:32.209562 systemd-logind[1966]: Removed session 16. Dec 12 18:47:37.230260 systemd[1]: Started sshd@16-172.31.25.153:22-139.178.89.65:39170.service - OpenSSH per-connection server daemon (139.178.89.65:39170). Dec 12 18:47:37.429662 sshd[5692]: Accepted publickey for core from 139.178.89.65 port 39170 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:37.431171 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:37.437910 systemd-logind[1966]: New session 17 of user core. Dec 12 18:47:37.441257 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:47:37.657742 sshd[5695]: Connection closed by 139.178.89.65 port 39170 Dec 12 18:47:37.658289 sshd-session[5692]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:37.664057 systemd[1]: sshd@16-172.31.25.153:22-139.178.89.65:39170.service: Deactivated successfully. Dec 12 18:47:37.667100 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 18:47:37.670526 systemd-logind[1966]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:47:37.673365 systemd-logind[1966]: Removed session 17. 
Dec 12 18:47:38.215456 kubelet[3543]: E1212 18:47:38.215266 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:47:38.216600 kubelet[3543]: E1212 18:47:38.215815 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:47:40.211424 kubelet[3543]: E1212 18:47:40.210340 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:47:40.213521 kubelet[3543]: E1212 18:47:40.211942 3543 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:47:42.211204 kubelet[3543]: E1212 18:47:42.211123 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:47:42.694849 systemd[1]: Started sshd@17-172.31.25.153:22-139.178.89.65:36240.service - OpenSSH per-connection server daemon (139.178.89.65:36240). 
Dec 12 18:47:42.954686 sshd[5712]: Accepted publickey for core from 139.178.89.65 port 36240 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:42.957496 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:42.964571 systemd-logind[1966]: New session 18 of user core. Dec 12 18:47:42.970294 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:47:43.211668 kubelet[3543]: E1212 18:47:43.211420 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:47:43.691002 sshd[5715]: Connection closed by 139.178.89.65 port 36240 Dec 12 18:47:43.691914 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:43.697244 systemd[1]: sshd@17-172.31.25.153:22-139.178.89.65:36240.service: Deactivated successfully. Dec 12 18:47:43.699895 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:47:43.703434 systemd-logind[1966]: Session 18 logged out. Waiting for processes to exit. 
Dec 12 18:47:43.704808 systemd-logind[1966]: Removed session 18. Dec 12 18:47:43.736511 systemd[1]: Started sshd@18-172.31.25.153:22-139.178.89.65:36248.service - OpenSSH per-connection server daemon (139.178.89.65:36248). Dec 12 18:47:43.912347 sshd[5727]: Accepted publickey for core from 139.178.89.65 port 36248 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:43.914571 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:43.921113 systemd-logind[1966]: New session 19 of user core. Dec 12 18:47:43.925431 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:47:47.634458 sshd[5730]: Connection closed by 139.178.89.65 port 36248 Dec 12 18:47:47.635650 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:47.645974 systemd[1]: sshd@18-172.31.25.153:22-139.178.89.65:36248.service: Deactivated successfully. Dec 12 18:47:47.649028 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:47:47.650171 systemd-logind[1966]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:47:47.652595 systemd-logind[1966]: Removed session 19. Dec 12 18:47:47.666581 systemd[1]: Started sshd@19-172.31.25.153:22-139.178.89.65:36250.service - OpenSSH per-connection server daemon (139.178.89.65:36250). Dec 12 18:47:47.871173 sshd[5746]: Accepted publickey for core from 139.178.89.65 port 36250 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:47.872912 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:47.878939 systemd-logind[1966]: New session 20 of user core. Dec 12 18:47:47.886316 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 18:47:48.979579 sshd[5749]: Connection closed by 139.178.89.65 port 36250 Dec 12 18:47:48.980554 sshd-session[5746]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:48.999492 systemd[1]: sshd@19-172.31.25.153:22-139.178.89.65:36250.service: Deactivated successfully. Dec 12 18:47:49.006894 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:47:49.010906 systemd-logind[1966]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:47:49.027843 systemd[1]: Started sshd@20-172.31.25.153:22-139.178.89.65:36258.service - OpenSSH per-connection server daemon (139.178.89.65:36258). Dec 12 18:47:49.029387 systemd-logind[1966]: Removed session 20. Dec 12 18:47:49.220201 sshd[5770]: Accepted publickey for core from 139.178.89.65 port 36258 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:49.222085 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:49.228158 systemd-logind[1966]: New session 21 of user core. Dec 12 18:47:49.234286 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:47:50.097281 sshd[5774]: Connection closed by 139.178.89.65 port 36258 Dec 12 18:47:50.098313 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:50.102916 systemd-logind[1966]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:47:50.103061 systemd[1]: sshd@20-172.31.25.153:22-139.178.89.65:36258.service: Deactivated successfully. Dec 12 18:47:50.105144 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:47:50.107066 systemd-logind[1966]: Removed session 21. Dec 12 18:47:50.141723 systemd[1]: Started sshd@21-172.31.25.153:22-139.178.89.65:36270.service - OpenSSH per-connection server daemon (139.178.89.65:36270). 
Dec 12 18:47:50.212719 containerd[1990]: time="2025-12-12T18:47:50.212468440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:47:50.344000 sshd[5784]: Accepted publickey for core from 139.178.89.65 port 36270 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:50.346123 sshd-session[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:50.371221 systemd-logind[1966]: New session 22 of user core. Dec 12 18:47:50.385580 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:47:50.489908 containerd[1990]: time="2025-12-12T18:47:50.489839832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:50.492181 containerd[1990]: time="2025-12-12T18:47:50.492113525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:47:50.492380 containerd[1990]: time="2025-12-12T18:47:50.492232078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:50.492502 kubelet[3543]: E1212 18:47:50.492442 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:47:50.493196 kubelet[3543]: E1212 18:47:50.492512 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:47:50.493196 kubelet[3543]: E1212 18:47:50.492832 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsgsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjdx8_calico-system(627e8918-ce59-4b1e-a58e-99fb7e0005f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:50.500833 kubelet[3543]: E1212 18:47:50.494450 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 
18:47:50.611741 sshd[5787]: Connection closed by 139.178.89.65 port 36270 Dec 12 18:47:50.612490 sshd-session[5784]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:50.618159 systemd[1]: sshd@21-172.31.25.153:22-139.178.89.65:36270.service: Deactivated successfully. Dec 12 18:47:50.620910 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:47:50.622531 systemd-logind[1966]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:47:50.625544 systemd-logind[1966]: Removed session 22. Dec 12 18:47:53.210706 containerd[1990]: time="2025-12-12T18:47:53.210626433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:47:53.537610 containerd[1990]: time="2025-12-12T18:47:53.537537575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:53.539781 containerd[1990]: time="2025-12-12T18:47:53.539709190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:47:53.539923 containerd[1990]: time="2025-12-12T18:47:53.539711317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:53.540179 kubelet[3543]: E1212 18:47:53.540128 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:53.540837 kubelet[3543]: E1212 18:47:53.540189 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:53.540837 kubelet[3543]: E1212 18:47:53.540579 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-ql2z4_calico-apiserver(ba2f2d53-b502-4a41-a1a8-fae69661a05c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:53.541596 containerd[1990]: time="2025-12-12T18:47:53.541567729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:47:53.542010 kubelet[3543]: E1212 18:47:53.541835 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:47:53.861701 containerd[1990]: 
time="2025-12-12T18:47:53.861573008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:53.863891 containerd[1990]: time="2025-12-12T18:47:53.863810453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:47:53.864132 containerd[1990]: time="2025-12-12T18:47:53.863909420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:47:53.864181 kubelet[3543]: E1212 18:47:53.864098 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:47:53.864181 kubelet[3543]: E1212 18:47:53.864144 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:47:53.864439 containerd[1990]: time="2025-12-12T18:47:53.864420528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:47:53.864939 kubelet[3543]: E1212 18:47:53.864856 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qj2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69dcd64969-ztnlv_calico-system(e69104d4-3599-4ed4-87b8-edf0ec255633): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:53.866274 kubelet[3543]: E1212 18:47:53.866233 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:47:54.166137 containerd[1990]: time="2025-12-12T18:47:54.165984750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:54.168262 containerd[1990]: 
time="2025-12-12T18:47:54.168118975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:47:54.168262 containerd[1990]: time="2025-12-12T18:47:54.168235735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:47:54.168751 kubelet[3543]: E1212 18:47:54.168691 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:47:54.168878 kubelet[3543]: E1212 18:47:54.168761 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:47:54.169434 kubelet[3543]: E1212 18:47:54.169143 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:91ec2d7bb4f647fe886f9383a115c758,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:54.169774 containerd[1990]: time="2025-12-12T18:47:54.169670639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:47:54.432943 
containerd[1990]: time="2025-12-12T18:47:54.432747558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:54.435140 containerd[1990]: time="2025-12-12T18:47:54.434948958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:47:54.435265 containerd[1990]: time="2025-12-12T18:47:54.434969359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:47:54.435500 kubelet[3543]: E1212 18:47:54.435446 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:54.435500 kubelet[3543]: E1212 18:47:54.435497 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:47:54.435824 kubelet[3543]: E1212 18:47:54.435737 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-s6q4x_calico-apiserver(9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:54.436392 containerd[1990]: time="2025-12-12T18:47:54.436362525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:47:54.436908 kubelet[3543]: E1212 18:47:54.436834 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:47:54.717076 containerd[1990]: time="2025-12-12T18:47:54.716912135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:54.719173 containerd[1990]: time="2025-12-12T18:47:54.719121418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:47:54.719319 containerd[1990]: time="2025-12-12T18:47:54.719221286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:47:54.719658 kubelet[3543]: E1212 18:47:54.719616 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:47:54.720021 kubelet[3543]: E1212 18:47:54.719664 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:47:54.720021 kubelet[3543]: E1212 18:47:54.719815 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:54.721393 kubelet[3543]: E1212 18:47:54.721338 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:47:55.651364 systemd[1]: Started sshd@22-172.31.25.153:22-139.178.89.65:47264.service - OpenSSH per-connection server daemon (139.178.89.65:47264). 
Dec 12 18:47:55.842468 sshd[5805]: Accepted publickey for core from 139.178.89.65 port 47264 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:47:55.843891 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:47:55.850885 systemd-logind[1966]: New session 23 of user core. Dec 12 18:47:55.860296 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:47:56.095836 sshd[5808]: Connection closed by 139.178.89.65 port 47264 Dec 12 18:47:56.097797 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Dec 12 18:47:56.101956 systemd-logind[1966]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:47:56.102220 systemd[1]: sshd@22-172.31.25.153:22-139.178.89.65:47264.service: Deactivated successfully. Dec 12 18:47:56.104908 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:47:56.106963 systemd-logind[1966]: Removed session 23. Dec 12 18:47:56.210805 containerd[1990]: time="2025-12-12T18:47:56.210544193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:47:56.499137 containerd[1990]: time="2025-12-12T18:47:56.498996339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:56.501058 containerd[1990]: time="2025-12-12T18:47:56.500968825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:47:56.501281 containerd[1990]: time="2025-12-12T18:47:56.501097634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:47:56.501515 kubelet[3543]: E1212 18:47:56.501473 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:47:56.501952 kubelet[3543]: E1212 18:47:56.501529 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:47:56.501952 kubelet[3543]: E1212 18:47:56.501689 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:56.505242 containerd[1990]: time="2025-12-12T18:47:56.505205238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:47:56.802352 containerd[1990]: time="2025-12-12T18:47:56.802280330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:47:56.804350 containerd[1990]: time="2025-12-12T18:47:56.804244967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:47:56.804901 containerd[1990]: time="2025-12-12T18:47:56.804305036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:47:56.805173 kubelet[3543]: E1212 18:47:56.804891 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:47:56.805173 kubelet[3543]: E1212 18:47:56.804950 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:47:56.805415 kubelet[3543]: E1212 18:47:56.805359 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:47:56.807133 kubelet[3543]: E1212 18:47:56.807078 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:48:01.135831 systemd[1]: Started sshd@23-172.31.25.153:22-139.178.89.65:39082.service - OpenSSH per-connection server daemon (139.178.89.65:39082). Dec 12 18:48:01.617165 sshd[5820]: Accepted publickey for core from 139.178.89.65 port 39082 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:48:01.626554 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:01.658446 systemd-logind[1966]: New session 24 of user core. Dec 12 18:48:01.677248 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:48:03.475559 sshd[5832]: Connection closed by 139.178.89.65 port 39082 Dec 12 18:48:03.487422 sshd-session[5820]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:03.514078 systemd[1]: sshd@23-172.31.25.153:22-139.178.89.65:39082.service: Deactivated successfully. Dec 12 18:48:03.541407 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 12 18:48:03.563268 systemd-logind[1966]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:48:03.571023 systemd-logind[1966]: Removed session 24. Dec 12 18:48:04.225009 kubelet[3543]: E1212 18:48:04.224469 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:48:05.210811 kubelet[3543]: E1212 18:48:05.210753 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:48:07.210027 kubelet[3543]: E1212 18:48:07.209957 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:48:07.211672 kubelet[3543]: E1212 18:48:07.211551 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:48:08.217895 kubelet[3543]: E1212 18:48:08.217839 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:48:08.512160 systemd[1]: Started sshd@24-172.31.25.153:22-139.178.89.65:39092.service - OpenSSH per-connection server daemon (139.178.89.65:39092). 
Dec 12 18:48:08.712079 sshd[5862]: Accepted publickey for core from 139.178.89.65 port 39092 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:48:08.717338 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:08.729928 systemd-logind[1966]: New session 25 of user core. Dec 12 18:48:08.735016 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 18:48:08.986305 sshd[5865]: Connection closed by 139.178.89.65 port 39092 Dec 12 18:48:08.986962 sshd-session[5862]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:08.998112 systemd[1]: sshd@24-172.31.25.153:22-139.178.89.65:39092.service: Deactivated successfully. Dec 12 18:48:08.998793 systemd-logind[1966]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:48:09.005417 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:48:09.012968 systemd-logind[1966]: Removed session 25. Dec 12 18:48:11.211977 kubelet[3543]: E1212 18:48:11.211890 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" 
podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:48:14.023181 systemd[1]: Started sshd@25-172.31.25.153:22-139.178.89.65:34364.service - OpenSSH per-connection server daemon (139.178.89.65:34364). Dec 12 18:48:14.214906 sshd[5878]: Accepted publickey for core from 139.178.89.65 port 34364 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:48:14.219097 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:14.229310 systemd-logind[1966]: New session 26 of user core. Dec 12 18:48:14.235254 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 12 18:48:14.566291 sshd[5881]: Connection closed by 139.178.89.65 port 34364 Dec 12 18:48:14.567262 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:14.575230 systemd-logind[1966]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:48:14.577778 systemd[1]: sshd@25-172.31.25.153:22-139.178.89.65:34364.service: Deactivated successfully. Dec 12 18:48:14.582698 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:48:14.590383 systemd-logind[1966]: Removed session 26. 
Dec 12 18:48:17.209259 kubelet[3543]: E1212 18:48:17.209209 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:48:19.208896 kubelet[3543]: E1212 18:48:19.208836 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:48:19.215330 kubelet[3543]: E1212 18:48:19.215195 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:48:19.608883 systemd[1]: Started sshd@26-172.31.25.153:22-139.178.89.65:34374.service - OpenSSH 
per-connection server daemon (139.178.89.65:34374). Dec 12 18:48:19.830370 sshd[5895]: Accepted publickey for core from 139.178.89.65 port 34374 ssh2: RSA SHA256:Md9biyT+lSBV32yjkc60mead4zeLpJVFu3kVKQ4VNxo Dec 12 18:48:19.833210 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:19.840251 systemd-logind[1966]: New session 27 of user core. Dec 12 18:48:19.847288 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 12 18:48:20.135333 sshd[5898]: Connection closed by 139.178.89.65 port 34374 Dec 12 18:48:20.136140 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:20.143462 systemd-logind[1966]: Session 27 logged out. Waiting for processes to exit. Dec 12 18:48:20.144449 systemd[1]: sshd@26-172.31.25.153:22-139.178.89.65:34374.service: Deactivated successfully. Dec 12 18:48:20.147148 systemd[1]: session-27.scope: Deactivated successfully. Dec 12 18:48:20.152656 systemd-logind[1966]: Removed session 27. 
Dec 12 18:48:21.209647 kubelet[3543]: E1212 18:48:21.209529 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:48:23.210982 kubelet[3543]: E1212 18:48:23.210931 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:48:24.213451 kubelet[3543]: E1212 18:48:24.213337 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:48:30.208474 kubelet[3543]: E1212 18:48:30.208353 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 18:48:32.208693 kubelet[3543]: E1212 18:48:32.208370 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:48:33.209745 kubelet[3543]: E1212 18:48:33.209528 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:48:34.209394 kubelet[3543]: E1212 18:48:34.209112 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:48:34.960001 systemd[1]: cri-containerd-ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4.scope: Deactivated successfully. Dec 12 18:48:34.962129 systemd[1]: cri-containerd-ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4.scope: Consumed 11.759s CPU time, 110.2M memory peak, 49M read from disk. 
Dec 12 18:48:35.040308 containerd[1990]: time="2025-12-12T18:48:35.040247078Z" level=info msg="received container exit event container_id:\"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\" id:\"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\" pid:3871 exit_status:1 exited_at:{seconds:1765565314 nanos:981798907}" Dec 12 18:48:35.095213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4-rootfs.mount: Deactivated successfully. Dec 12 18:48:36.172277 kubelet[3543]: I1212 18:48:36.172229 3543 scope.go:117] "RemoveContainer" containerID="ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4" Dec 12 18:48:36.178091 containerd[1990]: time="2025-12-12T18:48:36.178029646Z" level=info msg="CreateContainer within sandbox \"cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 18:48:36.209171 kubelet[3543]: E1212 18:48:36.209096 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:48:36.266640 systemd[1]: cri-containerd-16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb.scope: Deactivated successfully. Dec 12 18:48:36.267568 systemd[1]: cri-containerd-16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb.scope: Consumed 4.020s CPU time, 94.3M memory peak, 62.8M read from disk. Dec 12 18:48:36.273081 containerd[1990]: time="2025-12-12T18:48:36.273010723Z" level=info msg="received container exit event container_id:\"16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb\" id:\"16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb\" pid:3189 exit_status:1 exited_at:{seconds:1765565316 nanos:271913421}" Dec 12 18:48:36.298206 containerd[1990]: time="2025-12-12T18:48:36.296229994Z" level=info msg="Container b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:48:36.305695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500651140.mount: Deactivated successfully. Dec 12 18:48:36.326455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb-rootfs.mount: Deactivated successfully. 
Dec 12 18:48:36.337858 containerd[1990]: time="2025-12-12T18:48:36.337803406Z" level=info msg="CreateContainer within sandbox \"cdd4ebb0d862b64e48e6c949dd1a6ad5d946684995a342bab0c83190609397ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382\"" Dec 12 18:48:36.338667 containerd[1990]: time="2025-12-12T18:48:36.338448815Z" level=info msg="StartContainer for \"b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382\"" Dec 12 18:48:36.341676 containerd[1990]: time="2025-12-12T18:48:36.341631728Z" level=info msg="connecting to shim b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382" address="unix:///run/containerd/s/729651bdf164fb477c10c20dcb4bc6db11a1cf9aae6bc7c117f4d584b077c84d" protocol=ttrpc version=3 Dec 12 18:48:36.384305 systemd[1]: Started cri-containerd-b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382.scope - libcontainer container b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382. 
Dec 12 18:48:36.442093 containerd[1990]: time="2025-12-12T18:48:36.441927923Z" level=info msg="StartContainer for \"b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382\" returns successfully" Dec 12 18:48:37.170538 kubelet[3543]: I1212 18:48:37.170505 3543 scope.go:117] "RemoveContainer" containerID="16216809fe94cfe4cb73163b5f6fa7e64c5eb9fdb3483ce25b2551d881fdcbbb" Dec 12 18:48:37.174265 containerd[1990]: time="2025-12-12T18:48:37.173468706Z" level=info msg="CreateContainer within sandbox \"82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 18:48:37.208600 containerd[1990]: time="2025-12-12T18:48:37.208558995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:48:37.235316 containerd[1990]: time="2025-12-12T18:48:37.235272202Z" level=info msg="Container 7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:48:37.253028 containerd[1990]: time="2025-12-12T18:48:37.252979940Z" level=info msg="CreateContainer within sandbox \"82fd0960e38c524f58b250cdc28fd376efab2399e4838a628fa199280d2655af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a\"" Dec 12 18:48:37.255147 containerd[1990]: time="2025-12-12T18:48:37.253822416Z" level=info msg="StartContainer for \"7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a\"" Dec 12 18:48:37.255364 containerd[1990]: time="2025-12-12T18:48:37.255336649Z" level=info msg="connecting to shim 7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a" address="unix:///run/containerd/s/a732c1178cf2372d5dc40773c1e455fb94a10c5d7bf5fe77b64fbd0d9fe9f8ac" protocol=ttrpc version=3 Dec 12 18:48:37.284613 systemd[1]: Started cri-containerd-7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a.scope - libcontainer 
container 7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a. Dec 12 18:48:37.297984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360074982.mount: Deactivated successfully. Dec 12 18:48:37.376049 containerd[1990]: time="2025-12-12T18:48:37.376006584Z" level=info msg="StartContainer for \"7cf435d4b1abeab586b2203abf8863b5c626e271cec0c0c18894d9e68d9cdf4a\" returns successfully" Dec 12 18:48:37.529865 containerd[1990]: time="2025-12-12T18:48:37.529794454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:37.533021 containerd[1990]: time="2025-12-12T18:48:37.532956987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:48:37.533021 containerd[1990]: time="2025-12-12T18:48:37.533075645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:48:37.533620 kubelet[3543]: E1212 18:48:37.533572 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:48:37.534747 kubelet[3543]: E1212 18:48:37.533625 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:48:37.534747 kubelet[3543]: E1212 18:48:37.533862 
3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:91ec2d7bb4f647fe886f9383a115c758,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:37.536302 containerd[1990]: time="2025-12-12T18:48:37.536243602Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:48:37.835187 containerd[1990]: time="2025-12-12T18:48:37.835063147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:37.837202 containerd[1990]: time="2025-12-12T18:48:37.837132500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:48:37.837202 containerd[1990]: time="2025-12-12T18:48:37.837152521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:48:37.837520 kubelet[3543]: E1212 18:48:37.837417 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:48:37.837601 kubelet[3543]: E1212 18:48:37.837531 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:48:37.837696 kubelet[3543]: E1212 18:48:37.837655 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tjf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6448458988-2gsdl_calico-system(20a2fd47-4a22-4521-b92b-0d8c954400d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:37.838941 kubelet[3543]: E1212 18:48:37.838864 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:48:39.615491 systemd[1]: cri-containerd-f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66.scope: Deactivated successfully. Dec 12 18:48:39.616795 systemd[1]: cri-containerd-f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66.scope: Consumed 2.370s CPU time, 38.2M memory peak, 31.7M read from disk. Dec 12 18:48:39.619871 containerd[1990]: time="2025-12-12T18:48:39.619833717Z" level=info msg="received container exit event container_id:\"f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66\" id:\"f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66\" pid:3203 exit_status:1 exited_at:{seconds:1765565319 nanos:618975645}" Dec 12 18:48:39.652549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66-rootfs.mount: Deactivated successfully. 
Dec 12 18:48:40.183811 kubelet[3543]: I1212 18:48:40.183768 3543 scope.go:117] "RemoveContainer" containerID="f217310717c72537c6990e88fc38725edcf16a4b4595c601a6bc4cedf510ed66" Dec 12 18:48:40.186421 containerd[1990]: time="2025-12-12T18:48:40.186380811Z" level=info msg="CreateContainer within sandbox \"4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 12 18:48:40.203072 containerd[1990]: time="2025-12-12T18:48:40.202752988Z" level=info msg="Container c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:48:40.230206 containerd[1990]: time="2025-12-12T18:48:40.230163221Z" level=info msg="CreateContainer within sandbox \"4b5574885f72a49dd44e882deab901c8d09d8fc49133f3614002263dc0d91cbd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292\"" Dec 12 18:48:40.230714 containerd[1990]: time="2025-12-12T18:48:40.230679668Z" level=info msg="StartContainer for \"c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292\"" Dec 12 18:48:40.232571 containerd[1990]: time="2025-12-12T18:48:40.232492873Z" level=info msg="connecting to shim c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292" address="unix:///run/containerd/s/e2e59aba8fdf976d3f10471403ae01a5f712cf2ad4fa331add1d8bd1b092d628" protocol=ttrpc version=3 Dec 12 18:48:40.262508 systemd[1]: Started cri-containerd-c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292.scope - libcontainer container c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292. 
Dec 12 18:48:40.297688 kubelet[3543]: E1212 18:48:40.282215 3543 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-153?timeout=10s\": context deadline exceeded" Dec 12 18:48:40.349254 containerd[1990]: time="2025-12-12T18:48:40.349127291Z" level=info msg="StartContainer for \"c75477bd257da6a9f114d44c99cee52ee901ccc9010bf8469c7b56eb1f726292\" returns successfully" Dec 12 18:48:41.208993 containerd[1990]: time="2025-12-12T18:48:41.208907024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:48:41.464757 containerd[1990]: time="2025-12-12T18:48:41.464367144Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:41.466700 containerd[1990]: time="2025-12-12T18:48:41.466543704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:48:41.466700 containerd[1990]: time="2025-12-12T18:48:41.466667183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:41.467360 kubelet[3543]: E1212 18:48:41.467320 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:48:41.468220 kubelet[3543]: E1212 18:48:41.467788 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:48:41.468220 kubelet[3543]: E1212 18:48:41.468093 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsgsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjdx8_calico-system(627e8918-ce59-4b1e-a58e-99fb7e0005f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:41.469539 kubelet[3543]: E1212 18:48:41.469461 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5" Dec 12 
18:48:44.212595 containerd[1990]: time="2025-12-12T18:48:44.211807703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:48:44.472866 containerd[1990]: time="2025-12-12T18:48:44.472739454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:44.474989 containerd[1990]: time="2025-12-12T18:48:44.474933835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:48:44.475186 containerd[1990]: time="2025-12-12T18:48:44.474970867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:48:44.475237 kubelet[3543]: E1212 18:48:44.475209 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:48:44.475648 kubelet[3543]: E1212 18:48:44.475254 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:48:44.475648 kubelet[3543]: E1212 18:48:44.475389 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qj2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69dcd64969-ztnlv_calico-system(e69104d4-3599-4ed4-87b8-edf0ec255633): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:44.477593 kubelet[3543]: E1212 18:48:44.477555 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69dcd64969-ztnlv" podUID="e69104d4-3599-4ed4-87b8-edf0ec255633" Dec 12 18:48:45.209859 containerd[1990]: time="2025-12-12T18:48:45.209803974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:45.544991 containerd[1990]: 
time="2025-12-12T18:48:45.544931787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:45.547307 containerd[1990]: time="2025-12-12T18:48:45.547237802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:45.547494 containerd[1990]: time="2025-12-12T18:48:45.547265753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:45.547818 kubelet[3543]: E1212 18:48:45.547770 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:45.548243 kubelet[3543]: E1212 18:48:45.547846 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:45.548243 kubelet[3543]: E1212 18:48:45.548160 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzfq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-ql2z4_calico-apiserver(ba2f2d53-b502-4a41-a1a8-fae69661a05c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:45.549440 kubelet[3543]: E1212 18:48:45.549380 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-ql2z4" podUID="ba2f2d53-b502-4a41-a1a8-fae69661a05c" Dec 12 18:48:49.188821 systemd[1]: cri-containerd-b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382.scope: Deactivated successfully. Dec 12 18:48:49.190311 systemd[1]: cri-containerd-b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382.scope: Consumed 454ms CPU time, 66.1M memory peak, 31.1M read from disk. Dec 12 18:48:49.191111 containerd[1990]: time="2025-12-12T18:48:49.190940652Z" level=info msg="received container exit event container_id:\"b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382\" id:\"b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382\" pid:5982 exit_status:1 exited_at:{seconds:1765565329 nanos:189881661}" Dec 12 18:48:49.210461 containerd[1990]: time="2025-12-12T18:48:49.210383150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:48:49.261666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382-rootfs.mount: Deactivated successfully. 
Dec 12 18:48:49.478734 containerd[1990]: time="2025-12-12T18:48:49.478604410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:49.480826 containerd[1990]: time="2025-12-12T18:48:49.480752674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:48:49.481000 containerd[1990]: time="2025-12-12T18:48:49.480847649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:48:49.481125 kubelet[3543]: E1212 18:48:49.481025 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:49.481753 kubelet[3543]: E1212 18:48:49.481133 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:48:49.481809 containerd[1990]: time="2025-12-12T18:48:49.481613320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:48:49.481879 kubelet[3543]: E1212 18:48:49.481448 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:49.770836 containerd[1990]: time="2025-12-12T18:48:49.770775401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:49.773005 containerd[1990]: time="2025-12-12T18:48:49.772850204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:48:49.773005 containerd[1990]: time="2025-12-12T18:48:49.772972568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:48:49.773242 kubelet[3543]: E1212 18:48:49.773173 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:49.773242 kubelet[3543]: E1212 18:48:49.773226 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:48:49.773892 containerd[1990]: time="2025-12-12T18:48:49.773608511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:48:49.775367 kubelet[3543]: E1212 18:48:49.773786 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f58b74bcb-s6q4x_calico-apiserver(9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:49.777388 kubelet[3543]: E1212 18:48:49.777174 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f58b74bcb-s6q4x" podUID="9914e2c9-7a65-4cf8-bb0f-0c43fb4d4b6d" Dec 12 18:48:50.086747 containerd[1990]: time="2025-12-12T18:48:50.086382712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:48:50.088740 containerd[1990]: 
time="2025-12-12T18:48:50.088673126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:48:50.088890 containerd[1990]: time="2025-12-12T18:48:50.088757009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:48:50.089092 kubelet[3543]: E1212 18:48:50.089015 3543 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:48:50.089092 kubelet[3543]: E1212 18:48:50.089095 3543 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:48:50.089241 kubelet[3543]: E1212 18:48:50.089208 3543 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sckx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8rtgc_calico-system(c534bc62-f909-4723-a1ce-dd8a325ef04d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:48:50.090464 kubelet[3543]: E1212 18:48:50.090410 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8rtgc" podUID="c534bc62-f909-4723-a1ce-dd8a325ef04d" Dec 12 18:48:50.239639 kubelet[3543]: I1212 18:48:50.239535 3543 scope.go:117] "RemoveContainer" containerID="ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4" Dec 12 18:48:50.240060 kubelet[3543]: I1212 18:48:50.239821 3543 scope.go:117] "RemoveContainer" containerID="b547d844eef90cd62e003ff9fb62b6be0e9474cedd57dc6f08621d5917d58382" Dec 12 18:48:50.240741 kubelet[3543]: E1212 18:48:50.240227 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-zp2b6_tigera-operator(c914c620-78a5-4f4f-8ece-7e22d006e732)\"" pod="tigera-operator/tigera-operator-7dcd859c48-zp2b6" podUID="c914c620-78a5-4f4f-8ece-7e22d006e732" Dec 12 18:48:50.267854 containerd[1990]: time="2025-12-12T18:48:50.267816022Z" level=info msg="RemoveContainer for 
\"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\"" Dec 12 18:48:50.286901 containerd[1990]: time="2025-12-12T18:48:50.286846628Z" level=info msg="RemoveContainer for \"ccb4a587093c096c45b81585a4a583a4f062bb78afdbcf166b934575d7f951e4\" returns successfully" Dec 12 18:48:50.298211 kubelet[3543]: E1212 18:48:50.298076 3543 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 12 18:48:51.208826 kubelet[3543]: E1212 18:48:51.208774 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6448458988-2gsdl" podUID="20a2fd47-4a22-4521-b92b-0d8c954400d5" Dec 12 18:48:52.208429 kubelet[3543]: E1212 18:48:52.208207 3543 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjdx8" podUID="627e8918-ce59-4b1e-a58e-99fb7e0005f5"