Dec 16 13:17:16.918588 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:17:16.918627 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:17:16.918646 kernel: BIOS-provided physical RAM map:
Dec 16 13:17:16.918658 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:17:16.918669 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 16 13:17:16.918680 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:17:16.918695 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:17:16.918708 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:17:16.918720 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:17:16.918732 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:17:16.918744 kernel: NX (Execute Disable) protection: active
Dec 16 13:17:16.918759 kernel: APIC: Static calls initialized
Dec 16 13:17:16.918771 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 16 13:17:16.918784 kernel: extended physical RAM map:
Dec 16 13:17:16.918799 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:17:16.918812 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 16 13:17:16.918829 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 16 13:17:16.918842 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 16 13:17:16.918856 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:17:16.918869 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:17:16.918882 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:17:16.918896 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:17:16.918909 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:17:16.918922 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:17:16.918935 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 16 13:17:16.918949 kernel: secureboot: Secure boot disabled
Dec 16 13:17:16.918962 kernel: SMBIOS 2.7 present.
Dec 16 13:17:16.918978 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 16 13:17:16.919000 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:17:16.919013 kernel: Hypervisor detected: KVM
Dec 16 13:17:16.919026 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:17:16.919040 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:17:16.919053 kernel: kvm-clock: using sched offset of 5262384705 cycles
Dec 16 13:17:16.919068 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:17:16.919082 kernel: tsc: Detected 2499.996 MHz processor
Dec 16 13:17:16.919095 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:17:16.919109 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:17:16.919125 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:17:16.919138 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:17:16.919153 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:17:16.919172 kernel: Using GB pages for direct mapping
Dec 16 13:17:16.919187 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:17:16.919202 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 16 13:17:16.919216 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 13:17:16.919234 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 13:17:16.919249 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 13:17:16.919264 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 16 13:17:16.919279 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 16 13:17:16.919293 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 13:17:16.919308 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 13:17:16.919342 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 16 13:17:16.919357 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 16 13:17:16.919375 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:17:16.919390 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:17:16.919405 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 16 13:17:16.919420 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 16 13:17:16.919434 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 16 13:17:16.919449 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 16 13:17:16.919464 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 16 13:17:16.919478 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 16 13:17:16.919495 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 16 13:17:16.919510 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 16 13:17:16.919525 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 16 13:17:16.919539 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 16 13:17:16.919554 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 16 13:17:16.919569 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 16 13:17:16.919584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 16 13:17:16.919599 kernel: NUMA: Initialized distance table, cnt=1
Dec 16 13:17:16.919613 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 16 13:17:16.919628 kernel: Zone ranges:
Dec 16 13:17:16.919645 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:17:16.919659 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 16 13:17:16.919674 kernel: Normal empty
Dec 16 13:17:16.919688 kernel: Device empty
Dec 16 13:17:16.919703 kernel: Movable zone start for each node
Dec 16 13:17:16.919717 kernel: Early memory node ranges
Dec 16 13:17:16.919731 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:17:16.919746 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 16 13:17:16.919760 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 16 13:17:16.919778 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 16 13:17:16.919793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:17:16.919808 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:17:16.919822 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 16 13:17:16.919837 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 16 13:17:16.919851 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 16 13:17:16.919866 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:17:16.919881 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 16 13:17:16.919896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:17:16.919913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:17:16.919928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:17:16.919942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:17:16.919957 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:17:16.919971 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:17:16.919986 kernel: TSC deadline timer available
Dec 16 13:17:16.920000 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:17:16.920015 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:17:16.920029 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:17:16.920046 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:17:16.920062 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:17:16.920076 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:17:16.920091 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:17:16.920106 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:17:16.920121 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 16 13:17:16.920135 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:17:16.920150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:17:16.920165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:17:16.920180 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:17:16.920197 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:17:16.920211 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:17:16.920226 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:17:16.920241 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:17:16.920258 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:17:16.920273 kernel: random: crng init done
Dec 16 13:17:16.920288 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:17:16.920305 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:17:16.924045 kernel: Fallback order for Node 0: 0
Dec 16 13:17:16.924066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Dec 16 13:17:16.924082 kernel: Policy zone: DMA32
Dec 16 13:17:16.924112 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:17:16.924131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:17:16.924147 kernel: Kernel/User page tables isolation: enabled
Dec 16 13:17:16.924164 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:17:16.924180 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:17:16.924196 kernel: Dynamic Preempt: voluntary
Dec 16 13:17:16.924211 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:17:16.924229 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:17:16.924248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:17:16.924264 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:17:16.924280 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:17:16.924296 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:17:16.924324 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:17:16.924340 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:17:16.924360 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:17:16.924376 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:17:16.924392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:17:16.924408 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:17:16.924424 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:17:16.924439 kernel: Console: colour dummy device 80x25
Dec 16 13:17:16.924455 kernel: printk: legacy console [tty0] enabled
Dec 16 13:17:16.924471 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:17:16.924489 kernel: ACPI: Core revision 20240827
Dec 16 13:17:16.924506 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 16 13:17:16.924522 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:17:16.924537 kernel: x2apic enabled
Dec 16 13:17:16.924553 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:17:16.924569 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 16 13:17:16.924585 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 16 13:17:16.924601 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 16 13:17:16.924617 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 16 13:17:16.924636 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:17:16.924651 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:17:16.924667 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:17:16.924682 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:17:16.924698 kernel: RETBleed: Vulnerable
Dec 16 13:17:16.924713 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:17:16.924729 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:17:16.924745 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:17:16.924760 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 16 13:17:16.924776 kernel: active return thunk: its_return_thunk
Dec 16 13:17:16.924792 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:17:16.924811 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:17:16.924826 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:17:16.924842 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:17:16.924857 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 16 13:17:16.924873 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 16 13:17:16.924889 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:17:16.924905 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:17:16.924921 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:17:16.924936 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:17:16.924952 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:17:16.924968 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 16 13:17:16.924986 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 16 13:17:16.924999 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 16 13:17:16.925013 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 16 13:17:16.925029 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 16 13:17:16.925045 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 16 13:17:16.925060 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 16 13:17:16.925076 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:17:16.925092 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:17:16.925107 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:17:16.925123 kernel: landlock: Up and running.
Dec 16 13:17:16.925138 kernel: SELinux: Initializing.
Dec 16 13:17:16.925153 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:17:16.925172 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:17:16.925187 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 16 13:17:16.925203 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 16 13:17:16.925220 kernel: signal: max sigframe size: 3632
Dec 16 13:17:16.925235 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:17:16.925251 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:17:16.925267 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:17:16.925283 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:17:16.925299 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:17:16.925343 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:17:16.925360 kernel: .... node #0, CPUs: #1
Dec 16 13:17:16.925377 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 13:17:16.925393 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 13:17:16.925409 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:17:16.925425 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 16 13:17:16.925441 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved)
Dec 16 13:17:16.925457 kernel: devtmpfs: initialized
Dec 16 13:17:16.925473 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:17:16.925490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 16 13:17:16.925503 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:17:16.925515 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:17:16.925530 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:17:16.925551 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:17:16.925571 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:17:16.925592 kernel: audit: type=2000 audit(1765891034.654:1): state=initialized audit_enabled=0 res=1
Dec 16 13:17:16.925613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:17:16.925639 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:17:16.925659 kernel: cpuidle: using governor menu
Dec 16 13:17:16.925680 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:17:16.925700 kernel: dca service started, version 1.12.1
Dec 16 13:17:16.925715 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:17:16.925730 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:17:16.925745 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:17:16.925761 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:17:16.925777 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:17:16.925796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:17:16.925811 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:17:16.925827 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:17:16.925843 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:17:16.925859 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 13:17:16.925874 kernel: ACPI: Interpreter enabled
Dec 16 13:17:16.925889 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:17:16.925905 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:17:16.925921 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:17:16.925940 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:17:16.925955 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 13:17:16.925971 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:17:16.926220 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:17:16.927187 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 13:17:16.927359 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 13:17:16.927380 kernel: acpiphp: Slot [3] registered
Dec 16 13:17:16.927396 kernel: acpiphp: Slot [4] registered
Dec 16 13:17:16.927419 kernel: acpiphp: Slot [5] registered
Dec 16 13:17:16.927435 kernel: acpiphp: Slot [6] registered
Dec 16 13:17:16.927450 kernel: acpiphp: Slot [7] registered
Dec 16 13:17:16.927466 kernel: acpiphp: Slot [8] registered
Dec 16 13:17:16.927481 kernel: acpiphp: Slot [9] registered
Dec 16 13:17:16.927497 kernel: acpiphp: Slot [10] registered
Dec 16 13:17:16.927512 kernel: acpiphp: Slot [11] registered
Dec 16 13:17:16.927528 kernel: acpiphp: Slot [12] registered
Dec 16 13:17:16.927544 kernel: acpiphp: Slot [13] registered
Dec 16 13:17:16.927562 kernel: acpiphp: Slot [14] registered
Dec 16 13:17:16.927577 kernel: acpiphp: Slot [15] registered
Dec 16 13:17:16.927593 kernel: acpiphp: Slot [16] registered
Dec 16 13:17:16.927608 kernel: acpiphp: Slot [17] registered
Dec 16 13:17:16.927624 kernel: acpiphp: Slot [18] registered
Dec 16 13:17:16.927639 kernel: acpiphp: Slot [19] registered
Dec 16 13:17:16.927655 kernel: acpiphp: Slot [20] registered
Dec 16 13:17:16.927670 kernel: acpiphp: Slot [21] registered
Dec 16 13:17:16.927685 kernel: acpiphp: Slot [22] registered
Dec 16 13:17:16.927701 kernel: acpiphp: Slot [23] registered
Dec 16 13:17:16.927719 kernel: acpiphp: Slot [24] registered
Dec 16 13:17:16.927734 kernel: acpiphp: Slot [25] registered
Dec 16 13:17:16.927750 kernel: acpiphp: Slot [26] registered
Dec 16 13:17:16.927765 kernel: acpiphp: Slot [27] registered
Dec 16 13:17:16.927780 kernel: acpiphp: Slot [28] registered
Dec 16 13:17:16.927795 kernel: acpiphp: Slot [29] registered
Dec 16 13:17:16.927811 kernel: acpiphp: Slot [30] registered
Dec 16 13:17:16.927826 kernel: acpiphp: Slot [31] registered
Dec 16 13:17:16.927842 kernel: PCI host bridge to bus 0000:00
Dec 16 13:17:16.927983 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:17:16.928110 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:17:16.928233 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:17:16.929393 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 13:17:16.929535 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:17:16.929661 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:17:16.929833 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:17:16.929986 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:17:16.930131 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 16 13:17:16.930269 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 13:17:16.930434 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 16 13:17:16.930572 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 16 13:17:16.930710 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 16 13:17:16.930850 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 16 13:17:16.930997 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 16 13:17:16.931136 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 16 13:17:16.931279 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:17:16.931502 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 16 13:17:16.931640 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:17:16.931774 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:17:16.931927 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:17:16.932063 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 16 13:17:16.932204 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:17:16.932356 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 16 13:17:16.932377 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:17:16.932393 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:17:16.932409 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:17:16.932429 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:17:16.932445 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 13:17:16.932461 kernel: iommu: Default domain type: Translated
Dec 16 13:17:16.932477 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:17:16.932492 kernel: efivars: Registered efivars operations
Dec 16 13:17:16.932508 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:17:16.932524 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:17:16.932540 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 16 13:17:16.932555 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 16 13:17:16.932572 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 16 13:17:16.932705 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 16 13:17:16.932840 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 16 13:17:16.932976 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:17:16.932994 kernel: vgaarb: loaded
Dec 16 13:17:16.933008 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 16 13:17:16.933022 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 16 13:17:16.933035 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:17:16.933049 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:17:16.933066 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:17:16.936505 kernel: pnp: PnP ACPI init
Dec 16 13:17:16.936530 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:17:16.936547 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:17:16.936563 kernel: NET: Registered PF_INET protocol family
Dec 16 13:17:16.936578 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:17:16.936594 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 13:17:16.936609 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:17:16.936624 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:17:16.936646 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 13:17:16.936661 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 13:17:16.936676 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:17:16.936692 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:17:16.936707 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:17:16.936721 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:17:16.936898 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:17:16.937019 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:17:16.937145 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:17:16.937262 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 13:17:16.937402 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:17:16.937551 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 13:17:16.937571 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:17:16.937586 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:17:16.937601 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 16 13:17:16.937616 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:17:16.937630 kernel: Initialise system trusted keyrings
Dec 16 13:17:16.937649 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 13:17:16.937664 kernel: Key type asymmetric registered
Dec 16 13:17:16.937678 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:17:16.937692 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:17:16.937708 kernel: io scheduler mq-deadline registered
Dec 16 13:17:16.937723 kernel: io scheduler kyber registered
Dec 16 13:17:16.937738 kernel: io scheduler bfq registered
Dec 16 13:17:16.937752 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:17:16.937767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:17:16.937785 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:17:16.937800 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:17:16.937815 kernel: i8042: Warning: Keylock active
Dec 16 13:17:16.937829 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:17:16.937845 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:17:16.937987 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 13:17:16.938109 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 13:17:16.938229 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:17:16 UTC (1765891036)
Dec 16 13:17:16.938374 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 13:17:16.938417 kernel: intel_pstate: CPU model not supported
Dec 16 13:17:16.938436 kernel: efifb: probing for efifb
Dec 16 13:17:16.938452 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 16 13:17:16.938468 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 16 13:17:16.938485 kernel: efifb: scrolling: redraw
Dec 16 13:17:16.938501 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:17:16.938516 kernel: Console: switching to colour frame buffer device 100x37
Dec 16 13:17:16.938536 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:17:16.938553 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:17:16.938569 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:17:16.938584 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:17:16.938601 kernel: Segment Routing with IPv6
Dec 16 13:17:16.938617 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:17:16.938633 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:17:16.938649 kernel: Key type dns_resolver registered
Dec 16 13:17:16.938665 kernel: IPI shorthand broadcast: enabled
Dec 16 13:17:16.938681 kernel: sched_clock: Marking stable (2584002045, 159744654)->(2832415006, -88668307)
Dec 16 13:17:16.938700 kernel: registered taskstats version 1
Dec 16 13:17:16.938715 kernel: Loading compiled-in X.509 certificates
Dec 16 13:17:16.938730 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:17:16.938747 kernel: Demotion targets for Node 0: null
Dec 16 13:17:16.938764 kernel: Key type .fscrypt registered
Dec 16 13:17:16.938780 kernel: Key type fscrypt-provisioning registered
Dec 16 13:17:16.938797 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:17:16.938815 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:17:16.938832 kernel: ima: No architecture policies found
Dec 16 13:17:16.938853 kernel: clk: Disabling unused clocks
Dec 16 13:17:16.938872 kernel: Warning: unable to open an initial console.
Dec 16 13:17:16.938890 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:17:16.938908 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:17:16.938929 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:17:16.938949 kernel: Run /init as init process
Dec 16 13:17:16.938968 kernel: with arguments:
Dec 16 13:17:16.938996 kernel: /init
Dec 16 13:17:16.939013 kernel: with environment:
Dec 16 13:17:16.939030 kernel: HOME=/
Dec 16 13:17:16.939046 kernel: TERM=linux
Dec 16 13:17:16.939064 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:17:16.939086 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:17:16.939107 systemd[1]: Detected virtualization amazon. Dec 16 13:17:16.939123 systemd[1]: Detected architecture x86-64. Dec 16 13:17:16.939139 systemd[1]: Running in initrd. Dec 16 13:17:16.939154 systemd[1]: No hostname configured, using default hostname. Dec 16 13:17:16.939170 systemd[1]: Hostname set to . Dec 16 13:17:16.939187 systemd[1]: Initializing machine ID from VM UUID. Dec 16 13:17:16.939202 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:17:16.939219 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:17:16.939238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:17:16.939257 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:17:16.939273 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:17:16.939290 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:17:16.939308 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:17:16.939344 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:17:16.939365 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:17:16.939381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 13:17:16.939397 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:17:16.939416 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:17:16.939432 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:17:16.939448 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:17:16.939463 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:17:16.939479 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:17:16.939495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:17:16.939514 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:17:16.939530 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:17:16.939546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:17:16.939562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:17:16.939578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:17:16.939594 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:17:16.939610 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:17:16.939627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:17:16.939642 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:17:16.939661 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:17:16.939678 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:17:16.939693 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:17:16.939710 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 16 13:17:16.939726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:17:16.939742 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:17:16.939762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:17:16.939778 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:17:16.939794 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:17:16.939847 systemd-journald[188]: Collecting audit messages is disabled. Dec 16 13:17:16.939888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:17:16.939904 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:17:16.939921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:17:16.939939 systemd-journald[188]: Journal started Dec 16 13:17:16.939975 systemd-journald[188]: Runtime Journal (/run/log/journal/ec29a6bcab70690857eddb7d39e8e068) is 4.7M, max 38.1M, 33.3M free. Dec 16 13:17:16.905693 systemd-modules-load[189]: Inserted module 'overlay' Dec 16 13:17:16.946336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:17:16.954345 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:17:16.960291 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:17:16.969346 kernel: Bridge firewalling registered Dec 16 13:17:16.968813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:17:16.969665 systemd-modules-load[189]: Inserted module 'br_netfilter' Dec 16 13:17:16.973841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 13:17:16.975518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:17:16.980599 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:17:16.986861 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:17:16.989564 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:17:16.999531 systemd-tmpfiles[211]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:17:17.008169 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 13:17:17.006786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:17:17.009301 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:17:17.015481 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:17:17.023190 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:17:17.075691 systemd-resolved[231]: Positive Trust Anchors: Dec 16 13:17:17.076737 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:17:17.076801 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:17:17.083974 systemd-resolved[231]: Defaulting to hostname 'linux'. Dec 16 13:17:17.087294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:17:17.088038 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:17:17.124390 kernel: SCSI subsystem initialized Dec 16 13:17:17.134340 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:17:17.145351 kernel: iscsi: registered transport (tcp) Dec 16 13:17:17.167549 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:17:17.167627 kernel: QLogic iSCSI HBA Driver Dec 16 13:17:17.186623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:17:17.214939 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:17:17.217616 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:17:17.265157 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:17:17.267467 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 16 13:17:17.334361 kernel: raid6: avx512x4 gen() 17901 MB/s Dec 16 13:17:17.352343 kernel: raid6: avx512x2 gen() 17989 MB/s Dec 16 13:17:17.370346 kernel: raid6: avx512x1 gen() 17960 MB/s Dec 16 13:17:17.388346 kernel: raid6: avx2x4 gen() 17712 MB/s Dec 16 13:17:17.406347 kernel: raid6: avx2x2 gen() 17635 MB/s Dec 16 13:17:17.424610 kernel: raid6: avx2x1 gen() 13601 MB/s Dec 16 13:17:17.424670 kernel: raid6: using algorithm avx512x2 gen() 17989 MB/s Dec 16 13:17:17.443608 kernel: raid6: .... xor() 24253 MB/s, rmw enabled Dec 16 13:17:17.443680 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:17:17.465348 kernel: xor: automatically using best checksumming function avx Dec 16 13:17:17.633346 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:17:17.640103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:17:17.642467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:17:17.675699 systemd-udevd[437]: Using default interface naming scheme 'v255'. Dec 16 13:17:17.683207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:17:17.687474 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:17:17.712168 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation Dec 16 13:17:17.741518 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:17:17.743724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:17:17.831489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:17:17.835354 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:17:17.915345 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 13:17:17.915587 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 13:17:17.922435 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 16 13:17:17.932340 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:63:25:1c:99:eb Dec 16 13:17:17.947336 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:17:17.953343 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 13:17:17.956390 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 16 13:17:17.968337 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 13:17:17.966816 (udev-worker)[486]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:17:17.981342 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:17:17.982552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:17:17.982905 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:17:17.983941 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:17:17.991602 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:17:17.991671 kernel: GPT:9289727 != 33554431 Dec 16 13:17:17.991692 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:17:17.991710 kernel: GPT:9289727 != 33554431 Dec 16 13:17:17.991728 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:17:17.991754 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:17:17.985949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:17:17.991947 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Dec 16 13:17:18.017366 kernel: AES CTR mode by8 optimization enabled Dec 16 13:17:18.047671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:17:18.055341 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:17:18.188597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:17:18.199124 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:17:18.210866 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 13:17:18.223120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 13:17:18.232798 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 16 13:17:18.233466 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 13:17:18.234945 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:17:18.236191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:17:18.237328 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:17:18.239063 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:17:18.244426 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:17:18.269344 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:17:18.269475 disk-uuid[673]: Primary Header is updated. Dec 16 13:17:18.269475 disk-uuid[673]: Secondary Entries is updated. Dec 16 13:17:18.269475 disk-uuid[673]: Secondary Header is updated. Dec 16 13:17:18.269855 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:17:19.292387 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:17:19.292631 disk-uuid[680]: The operation has completed successfully. 
Dec 16 13:17:19.421879 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:17:19.422003 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:17:19.469942 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:17:19.483999 sh[941]: Success Dec 16 13:17:19.504642 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:17:19.504746 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:17:19.506677 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:17:19.518351 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 16 13:17:19.601408 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:17:19.605404 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:17:19.615162 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 13:17:19.643371 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (976) Dec 16 13:17:19.646429 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:17:19.646492 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:17:19.733045 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:17:19.733114 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:17:19.733128 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:17:19.744482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:17:19.745436 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Dec 16 13:17:19.745988 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:17:19.746762 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:17:19.748417 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:17:19.779375 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1009) Dec 16 13:17:19.784093 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:17:19.784155 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:17:19.792528 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:17:19.792612 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:17:19.800378 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:17:19.801976 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:17:19.804824 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:17:19.856529 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:17:19.859734 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:17:19.912726 systemd-networkd[1145]: lo: Link UP Dec 16 13:17:19.912738 systemd-networkd[1145]: lo: Gained carrier Dec 16 13:17:19.914553 systemd-networkd[1145]: Enumeration completed Dec 16 13:17:19.914688 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:17:19.915171 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 13:17:19.915177 systemd-networkd[1145]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:17:19.916613 systemd[1]: Reached target network.target - Network. Dec 16 13:17:19.918778 systemd-networkd[1145]: eth0: Link UP Dec 16 13:17:19.918784 systemd-networkd[1145]: eth0: Gained carrier Dec 16 13:17:19.918801 systemd-networkd[1145]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:17:19.927410 systemd-networkd[1145]: eth0: DHCPv4 address 172.31.26.5/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:17:20.216627 ignition[1080]: Ignition 2.22.0 Dec 16 13:17:20.216642 ignition[1080]: Stage: fetch-offline Dec 16 13:17:20.216812 ignition[1080]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:17:20.216820 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:17:20.217495 ignition[1080]: Ignition finished successfully Dec 16 13:17:20.218836 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:17:20.220488 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:17:20.257636 ignition[1155]: Ignition 2.22.0 Dec 16 13:17:20.257651 ignition[1155]: Stage: fetch Dec 16 13:17:20.257940 ignition[1155]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:17:20.257948 ignition[1155]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:17:20.258026 ignition[1155]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:17:20.265795 ignition[1155]: PUT result: OK Dec 16 13:17:20.267567 ignition[1155]: parsed url from cmdline: "" Dec 16 13:17:20.267579 ignition[1155]: no config URL provided Dec 16 13:17:20.267586 ignition[1155]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:17:20.267598 ignition[1155]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:17:20.267627 ignition[1155]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:17:20.268189 ignition[1155]: PUT result: OK Dec 16 13:17:20.268240 ignition[1155]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 13:17:20.268800 ignition[1155]: GET result: OK Dec 16 13:17:20.268890 ignition[1155]: parsing config with SHA512: 618981911748959472194ec14c6fff58a28788cdc18ab054a289f28086fb585ca3de5facd0aea5de28916723c9da695c318702dec5f56764ff766780726747d1 Dec 16 13:17:20.274691 unknown[1155]: fetched base config from "system" Dec 16 13:17:20.275283 unknown[1155]: fetched base config from "system" Dec 16 13:17:20.275622 ignition[1155]: fetch: fetch complete Dec 16 13:17:20.275290 unknown[1155]: fetched user config from "aws" Dec 16 13:17:20.275627 ignition[1155]: fetch: fetch passed Dec 16 13:17:20.278260 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:17:20.275673 ignition[1155]: Ignition finished successfully Dec 16 13:17:20.280241 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 16 13:17:20.314419 ignition[1162]: Ignition 2.22.0 Dec 16 13:17:20.314434 ignition[1162]: Stage: kargs Dec 16 13:17:20.314810 ignition[1162]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:17:20.314821 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:17:20.314924 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:17:20.316057 ignition[1162]: PUT result: OK Dec 16 13:17:20.318725 ignition[1162]: kargs: kargs passed Dec 16 13:17:20.318813 ignition[1162]: Ignition finished successfully Dec 16 13:17:20.321329 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:17:20.322859 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:17:20.358361 ignition[1168]: Ignition 2.22.0 Dec 16 13:17:20.358377 ignition[1168]: Stage: disks Dec 16 13:17:20.358788 ignition[1168]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:17:20.358800 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:17:20.358918 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:17:20.359961 ignition[1168]: PUT result: OK Dec 16 13:17:20.363688 ignition[1168]: disks: disks passed Dec 16 13:17:20.364341 ignition[1168]: Ignition finished successfully Dec 16 13:17:20.366226 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:17:20.366933 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:17:20.367512 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:17:20.368097 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:17:20.368707 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:17:20.369283 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:17:20.371154 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 16 13:17:20.421619 systemd-fsck[1176]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:17:20.424260 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:17:20.425986 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:17:20.584336 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:17:20.585351 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:17:20.586276 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:17:20.588905 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:17:20.591413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:17:20.592743 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:17:20.593158 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:17:20.593186 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:17:20.601741 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:17:20.604218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 13:17:20.616348 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1195) Dec 16 13:17:20.621108 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:17:20.621167 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:17:20.627995 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:17:20.628064 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:17:20.629675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:17:20.883377 initrd-setup-root[1221]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:17:20.902851 initrd-setup-root[1228]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:17:20.908183 initrd-setup-root[1235]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:17:20.912995 initrd-setup-root[1242]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:17:21.107238 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:17:21.109400 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:17:21.112500 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:17:21.129680 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:17:21.131875 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:17:21.161642 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 13:17:21.168998 ignition[1310]: INFO : Ignition 2.22.0 Dec 16 13:17:21.168998 ignition[1310]: INFO : Stage: mount Dec 16 13:17:21.170692 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:17:21.170692 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:17:21.170692 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:17:21.172397 ignition[1310]: INFO : PUT result: OK Dec 16 13:17:21.173295 ignition[1310]: INFO : mount: mount passed Dec 16 13:17:21.174397 ignition[1310]: INFO : Ignition finished successfully Dec 16 13:17:21.175297 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:17:21.177199 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:17:21.197356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:17:21.225350 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1321) Dec 16 13:17:21.228365 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:17:21.228420 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:17:21.236752 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:17:21.236825 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:17:21.239250 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:17:21.277007 ignition[1338]: INFO : Ignition 2.22.0
Dec 16 13:17:21.277007 ignition[1338]: INFO : Stage: files
Dec 16 13:17:21.278850 ignition[1338]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:17:21.278850 ignition[1338]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:17:21.278850 ignition[1338]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:17:21.278850 ignition[1338]: INFO : PUT result: OK
Dec 16 13:17:21.281924 ignition[1338]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:17:21.283053 ignition[1338]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:17:21.284113 ignition[1338]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:17:21.287080 ignition[1338]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:17:21.287990 ignition[1338]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:17:21.288956 ignition[1338]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:17:21.288516 unknown[1338]: wrote ssh authorized keys file for user: core
Dec 16 13:17:21.291282 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:17:21.291983 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 13:17:21.352354 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:17:21.480865 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 13:17:21.480865 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:17:21.482472 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:17:21.487825 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:17:21.487825 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:17:21.487825 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:17:21.490565 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:17:21.490565 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:17:21.490565 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 13:17:21.611474 systemd-networkd[1145]: eth0: Gained IPv6LL
Dec 16 13:17:21.968603 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:17:22.299246 ignition[1338]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 13:17:22.299246 ignition[1338]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:17:22.308642 ignition[1338]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:17:22.313467 ignition[1338]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:17:22.313467 ignition[1338]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:17:22.313467 ignition[1338]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:17:22.317191 ignition[1338]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:17:22.317191 ignition[1338]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:17:22.317191 ignition[1338]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:17:22.317191 ignition[1338]: INFO : files: files passed
Dec 16 13:17:22.317191 ignition[1338]: INFO : Ignition finished successfully
Dec 16 13:17:22.316503 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:17:22.321492 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:17:22.326423 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:17:22.337178 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:17:22.338001 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:17:22.343990 initrd-setup-root-after-ignition[1368]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:17:22.345858 initrd-setup-root-after-ignition[1368]: grep:
Dec 16 13:17:22.346569 initrd-setup-root-after-ignition[1372]: grep:
Dec 16 13:17:22.347394 initrd-setup-root-after-ignition[1368]: /sysroot/usr/share/flatcar/enabled-sysext.conf
Dec 16 13:17:22.348360 initrd-setup-root-after-ignition[1372]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:17:22.347648 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:17:22.350243 initrd-setup-root-after-ignition[1368]: : No such file or directory
Dec 16 13:17:22.349280 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:17:22.352175 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:17:22.427746 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:17:22.427858 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:17:22.429230 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:17:22.429903 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:17:22.430643 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:17:22.431613 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:17:22.462802 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:17:22.465125 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:17:22.490563 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:17:22.491392 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:17:22.492469 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:17:22.493355 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:17:22.493588 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:17:22.494761 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:17:22.495813 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:17:22.496613 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:17:22.497414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:17:22.498197 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:17:22.498942 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:17:22.499893 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:17:22.500679 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:17:22.501483 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:17:22.502659 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:17:22.503604 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:17:22.504351 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:17:22.504599 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:17:22.505658 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:17:22.506493 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:17:22.507296 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:17:22.507646 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:17:22.508161 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:17:22.508403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:17:22.509443 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:17:22.509687 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:17:22.510272 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:17:22.510442 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:17:22.513438 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:17:22.514258 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:17:22.514505 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:17:22.518405 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:17:22.519773 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:17:22.520626 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:17:22.521992 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:17:22.522805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:17:22.529577 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:17:22.530410 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:17:22.555497 ignition[1392]: INFO : Ignition 2.22.0
Dec 16 13:17:22.555497 ignition[1392]: INFO : Stage: umount
Dec 16 13:17:22.557228 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:17:22.557228 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:17:22.557228 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:17:22.559354 ignition[1392]: INFO : PUT result: OK
Dec 16 13:17:22.559844 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:17:22.562455 ignition[1392]: INFO : umount: umount passed
Dec 16 13:17:22.562455 ignition[1392]: INFO : Ignition finished successfully
Dec 16 13:17:22.565063 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:17:22.565172 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:17:22.566486 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:17:22.566572 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:17:22.567424 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:17:22.567486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:17:22.568061 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:17:22.568120 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:17:22.568741 systemd[1]: Stopped target network.target - Network.
Dec 16 13:17:22.569407 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:17:22.569470 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:17:22.570056 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:17:22.570635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:17:22.574389 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:17:22.574812 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:17:22.575832 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:17:22.576492 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:17:22.576544 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:17:22.577104 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:17:22.577150 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:17:22.577755 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:17:22.577834 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:17:22.578428 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:17:22.578488 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:17:22.579366 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:17:22.579988 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:17:22.583885 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:17:22.584041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:17:22.588226 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:17:22.589151 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:17:22.589267 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:17:22.592248 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:17:22.593690 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:17:22.593843 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:17:22.596021 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:17:22.596348 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:17:22.597272 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:17:22.597423 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:17:22.599072 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:17:22.601420 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:17:22.601500 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:17:22.602372 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:17:22.602440 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:17:22.604364 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:17:22.604430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:17:22.605309 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:17:22.611861 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:17:22.622755 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:17:22.623035 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:17:22.624714 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:17:22.624811 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:17:22.625810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:17:22.625859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:17:22.626668 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:17:22.626738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:17:22.629239 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:17:22.629309 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:17:22.631147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:17:22.631218 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:17:22.633371 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:17:22.636148 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:17:22.636235 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:17:22.640304 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:17:22.640459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:17:22.641250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:17:22.641337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:17:22.643514 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:17:22.645454 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:17:22.651459 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:17:22.651611 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:17:22.704578 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:17:22.704693 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:17:22.705878 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:17:22.706405 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:17:22.706467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:17:22.707953 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:17:22.720764 systemd[1]: Switching root.
Dec 16 13:17:22.762373 systemd-journald[188]: Journal stopped
Dec 16 13:17:24.405190 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:17:24.405270 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:17:24.405297 kernel: SELinux: policy capability open_perms=1
Dec 16 13:17:24.405329 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:17:24.405348 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:17:24.405366 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:17:24.405385 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:17:24.405403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:17:24.405425 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:17:24.405443 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:17:24.405461 kernel: audit: type=1403 audit(1765891043.104:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:17:24.405485 systemd[1]: Successfully loaded SELinux policy in 89.512ms.
Dec 16 13:17:24.405520 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.532ms.
Dec 16 13:17:24.405543 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:17:24.405563 systemd[1]: Detected virtualization amazon.
Dec 16 13:17:24.405583 systemd[1]: Detected architecture x86-64.
Dec 16 13:17:24.405605 systemd[1]: Detected first boot.
Dec 16 13:17:24.405625 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:17:24.405644 zram_generator::config[1437]: No configuration found.
Dec 16 13:17:24.405663 kernel: Guest personality initialized and is inactive
Dec 16 13:17:24.405681 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:17:24.405699 kernel: Initialized host personality
Dec 16 13:17:24.405716 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:17:24.405734 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:17:24.405755 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:17:24.405777 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:17:24.405801 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:17:24.405821 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:17:24.405840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:17:24.405860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:17:24.405879 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:17:24.405902 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:17:24.405924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:17:24.405945 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:17:24.405965 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:17:24.405985 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:17:24.406004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:17:24.406025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:17:24.406047 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:17:24.406066 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:17:24.406085 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:17:24.406108 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:17:24.406127 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:17:24.406146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:17:24.406166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:17:24.406185 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:17:24.406205 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:17:24.406224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:17:24.406243 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:17:24.406262 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:17:24.406285 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:17:24.406304 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:17:24.408392 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:17:24.408421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:17:24.408442 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:17:24.408462 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:17:24.408482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:17:24.408502 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:17:24.408522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:17:24.408548 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:17:24.408568 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:17:24.408587 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:17:24.408607 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:17:24.408627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:24.408647 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:17:24.408666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:17:24.408685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:17:24.408706 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:17:24.408729 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:17:24.408749 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:17:24.408768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:17:24.408788 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:17:24.408810 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:17:24.408830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:17:24.408850 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:17:24.408869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:17:24.408891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:17:24.408911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:17:24.408930 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:17:24.408950 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:17:24.408969 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:17:24.408988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:17:24.409009 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:17:24.409029 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:17:24.409051 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:17:24.409071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:17:24.409090 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:17:24.409110 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:17:24.409130 kernel: loop: module loaded
Dec 16 13:17:24.409150 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:17:24.409173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:17:24.409193 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:17:24.409212 systemd[1]: Stopped verity-setup.service.
Dec 16 13:17:24.409233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:24.409252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:17:24.409272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:17:24.409292 kernel: fuse: init (API version 7.41)
Dec 16 13:17:24.409328 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:17:24.409348 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:17:24.409369 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:17:24.409390 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:17:24.409410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:17:24.409430 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:17:24.409450 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:17:24.409473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:17:24.409492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:17:24.409512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:17:24.409532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:17:24.409592 systemd-journald[1520]: Collecting audit messages is disabled.
Dec 16 13:17:24.409630 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:17:24.409650 systemd-journald[1520]: Journal started
Dec 16 13:17:24.409686 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec29a6bcab70690857eddb7d39e8e068) is 4.7M, max 38.1M, 33.3M free.
Dec 16 13:17:24.038395 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:17:24.057644 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:17:24.058167 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:17:24.420871 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:17:24.420970 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:17:24.419826 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:17:24.420131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:17:24.422376 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:17:24.424373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:17:24.427533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:17:24.455082 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:17:24.461435 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:17:24.466423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:17:24.469426 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:17:24.469483 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:17:24.473865 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:17:24.482540 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:17:24.484158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:17:24.490550 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:17:24.497673 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:17:24.498604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:17:24.504553 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:17:24.505292 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:17:24.507254 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:17:24.511477 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:17:24.513450 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:17:24.516883 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:17:24.518206 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:17:24.548603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:17:24.557568 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:17:24.562792 kernel: loop0: detected capacity change from 0 to 72368
Dec 16 13:17:24.562872 kernel: ACPI: bus type drm_connector registered
Dec 16 13:17:24.565767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:17:24.568420 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:17:24.579018 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec29a6bcab70690857eddb7d39e8e068 is 34.226ms for 1016 entries.
Dec 16 13:17:24.579018 systemd-journald[1520]: System Journal (/var/log/journal/ec29a6bcab70690857eddb7d39e8e068) is 8M, max 195.6M, 187.6M free.
Dec 16 13:17:24.650646 systemd-journald[1520]: Received client request to flush runtime journal.
Dec 16 13:17:24.583730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:17:24.592399 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:17:24.593185 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:17:24.596800 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:17:24.641429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:17:24.658882 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:17:24.683352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:17:24.690698 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:17:24.706893 kernel: loop1: detected capacity change from 0 to 224512
Dec 16 13:17:24.718274 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:17:24.723454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:17:24.767340 kernel: loop2: detected capacity change from 0 to 110984
Dec 16 13:17:24.776253 systemd-tmpfiles[1588]: ACLs are not supported, ignoring.
Dec 16 13:17:24.777000 systemd-tmpfiles[1588]: ACLs are not supported, ignoring.
Dec 16 13:17:24.784876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:17:24.836428 kernel: loop3: detected capacity change from 0 to 128560
Dec 16 13:17:24.965339 kernel: loop4: detected capacity change from 0 to 72368
Dec 16 13:17:25.002485 kernel: loop5: detected capacity change from 0 to 224512
Dec 16 13:17:25.037840 kernel: loop6: detected capacity change from 0 to 110984
Dec 16 13:17:25.060703 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:17:25.061953 kernel: loop7: detected capacity change from 0 to 128560
Dec 16 13:17:25.080992 (sd-merge)[1594]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 16 13:17:25.082291 (sd-merge)[1594]: Merged extensions into '/usr'.
Dec 16 13:17:25.091852 systemd[1]: Reload requested from client PID 1568 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 13:17:25.092029 systemd[1]: Reloading...
Dec 16 13:17:25.245401 zram_generator::config[1621]: No configuration found.
Dec 16 13:17:25.637762 systemd[1]: Reloading finished in 545 ms.
Dec 16 13:17:25.666660 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 13:17:25.668714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 13:17:25.692657 systemd[1]: Starting ensure-sysext.service...
Dec 16 13:17:25.697505 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:17:25.706053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:17:25.724657 systemd[1]: Reload requested from client PID 1672 ('systemctl') (unit ensure-sysext.service)...
Dec 16 13:17:25.724677 systemd[1]: Reloading...
Dec 16 13:17:25.743802 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 13:17:25.744710 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:17:25.745101 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:17:25.745526 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:17:25.749910 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:17:25.750380 systemd-tmpfiles[1673]: ACLs are not supported, ignoring.
Dec 16 13:17:25.750462 systemd-tmpfiles[1673]: ACLs are not supported, ignoring.
Dec 16 13:17:25.763896 systemd-tmpfiles[1673]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:17:25.763916 systemd-tmpfiles[1673]: Skipping /boot
Dec 16 13:17:25.783881 systemd-tmpfiles[1673]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:17:25.783899 systemd-tmpfiles[1673]: Skipping /boot
Dec 16 13:17:25.794683 systemd-udevd[1674]: Using default interface naming scheme 'v255'.
Dec 16 13:17:25.848263 zram_generator::config[1701]: No configuration found.
Dec 16 13:17:26.022397 ldconfig[1560]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 13:17:26.044527 (udev-worker)[1729]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:17:26.128336 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:17:26.236342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:17:26.264360 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:17:26.270459 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 16 13:17:26.303345 kernel: ACPI: button: Sleep Button [SLPF]
Dec 16 13:17:26.309756 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 16 13:17:26.385702 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:17:26.385785 systemd[1]: Reloading finished in 660 ms.
Dec 16 13:17:26.396077 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:17:26.399394 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 13:17:26.400864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:17:26.445725 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:17:26.449630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:17:26.453120 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:17:26.458390 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:17:26.472815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:17:26.479711 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:17:26.486375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:26.486686 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:17:26.491433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:17:26.495080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:17:26.501528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:17:26.503376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:17:26.503585 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:17:26.503738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:26.510493 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:17:26.515847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:26.516286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:17:26.516769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:17:26.517095 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:17:26.517564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:26.533154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:26.533736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:17:26.551779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:17:26.552656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:17:26.552917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:17:26.553781 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:17:26.555275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:17:26.562091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:17:26.565581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:17:26.584408 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:17:26.596912 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:17:26.624742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:17:26.625003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:17:26.626573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:17:26.633604 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:17:26.636552 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:17:26.642943 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:17:26.645197 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:17:26.646261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:17:26.652042 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:17:26.652346 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:17:26.675534 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:17:26.690480 augenrules[1918]: No rules
Dec 16 13:17:26.691434 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:17:26.691832 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:17:26.700767 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:17:26.710841 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:17:26.762495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 13:17:26.767587 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:17:26.792304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:17:26.806157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:17:26.806486 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:17:26.810671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:17:26.831014 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:17:26.845418 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:17:26.971147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:17:27.035473 systemd-networkd[1860]: lo: Link UP
Dec 16 13:17:27.035840 systemd-networkd[1860]: lo: Gained carrier
Dec 16 13:17:27.037765 systemd-networkd[1860]: Enumeration completed
Dec 16 13:17:27.038024 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:17:27.038775 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:17:27.040476 systemd-networkd[1860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:17:27.042585 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:17:27.045639 systemd-networkd[1860]: eth0: Link UP
Dec 16 13:17:27.045881 systemd-networkd[1860]: eth0: Gained carrier
Dec 16 13:17:27.045916 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:17:27.046565 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:17:27.054751 systemd-networkd[1860]: eth0: DHCPv4 address 172.31.26.5/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 13:17:27.069648 systemd-resolved[1861]: Positive Trust Anchors:
Dec 16 13:17:27.069672 systemd-resolved[1861]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:17:27.069720 systemd-resolved[1861]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:17:27.075235 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:17:27.077196 systemd-resolved[1861]: Defaulting to hostname 'linux'.
Dec 16 13:17:27.078882 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:17:27.079497 systemd[1]: Reached target network.target - Network.
Dec 16 13:17:27.079938 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:17:27.080375 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:17:27.080854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:17:27.081262 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:17:27.081664 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:17:27.082191 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:17:27.082676 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:17:27.083060 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:17:27.083548 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:17:27.083597 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:17:27.084011 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:17:27.085994 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:17:27.088111 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:17:27.090809 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:17:27.091482 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:17:27.091935 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:17:27.095628 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:17:27.097065 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:17:27.098270 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:17:27.099744 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:17:27.100182 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:17:27.100683 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:17:27.100722 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:17:27.101800 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:17:27.106502 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:17:27.115778 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:17:27.121120 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:17:27.126277 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:17:27.129996 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:17:27.130650 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:17:27.142297 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:17:27.147742 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:17:27.173784 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:17:27.177451 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:17:27.183035 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 16 13:17:27.192997 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:17:27.195638 jq[1957]: false
Dec 16 13:17:27.197700 google_oslogin_nss_cache[1959]: oslogin_cache_refresh[1959]: Refreshing passwd entry cache
Dec 16 13:17:27.197055 oslogin_cache_refresh[1959]: Refreshing passwd entry cache
Dec 16 13:17:27.214525 extend-filesystems[1958]: Found /dev/nvme0n1p6
Dec 16 13:17:27.204617 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:17:27.218831 google_oslogin_nss_cache[1959]: oslogin_cache_refresh[1959]: Failure getting users, quitting
Dec 16 13:17:27.218831 google_oslogin_nss_cache[1959]: oslogin_cache_refresh[1959]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:17:27.218831 google_oslogin_nss_cache[1959]: oslogin_cache_refresh[1959]: Refreshing group entry cache
Dec 16 13:17:27.218226 oslogin_cache_refresh[1959]: Failure getting users, quitting
Dec 16 13:17:27.218249 oslogin_cache_refresh[1959]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:17:27.218298 oslogin_cache_refresh[1959]: Refreshing group entry cache
Dec 16 13:17:27.219610 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:17:27.221977 google_oslogin_nss_cache[1959]: oslogin_cache_refresh[1959]: Failure getting groups, quitting
Dec 16 13:17:27.221977 google_oslogin_nss_cache[1959]: oslogin_cache_refresh[1959]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:17:27.220851 oslogin_cache_refresh[1959]: Failure getting groups, quitting
Dec 16 13:17:27.220868 oslogin_cache_refresh[1959]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:17:27.222517 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:17:27.223277 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:17:27.223569 extend-filesystems[1958]: Found /dev/nvme0n1p9
Dec 16 13:17:27.228468 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:17:27.238509 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:17:27.240489 extend-filesystems[1958]: Checking size of /dev/nvme0n1p9
Dec 16 13:17:27.255535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:17:27.256642 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:17:27.256934 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:17:27.257639 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:17:27.257999 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:17:27.266518 jq[1976]: true
Dec 16 13:17:27.275637 extend-filesystems[1958]: Resized partition /dev/nvme0n1p9
Dec 16 13:17:27.279781 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:17:27.280092 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:17:27.294830 extend-filesystems[1985]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:17:27.308727 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Dec 16 13:17:27.343185 ntpd[1961]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:17:27.343260 ntpd[1961]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:17:27.343637 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:17:27.343637 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:17:27.343637 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: ----------------------------------------------------
Dec 16 13:17:27.343637 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:17:27.343637 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:17:27.343637 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: corporation. Support and training for ntp-4 are
Dec 16 13:17:27.343271 ntpd[1961]: ----------------------------------------------------
Dec 16 13:17:27.343281 ntpd[1961]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:17:27.343290 ntpd[1961]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:17:27.343300 ntpd[1961]: corporation. Support and training for ntp-4 are
Dec 16 13:17:27.343310 ntpd[1961]: available at https://www.nwtime.org/support
Dec 16 13:17:27.345969 ntpd[1961]: ----------------------------------------------------
Dec 16 13:17:27.349506 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: available at https://www.nwtime.org/support
Dec 16 13:17:27.349506 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: ----------------------------------------------------
Dec 16 13:17:27.350734 ntpd[1961]: proto: precision = 0.086 usec (-23)
Dec 16 13:17:27.352151 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: proto: precision = 0.086 usec (-23)
Dec 16 13:17:27.355630 (ntainerd)[1997]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: basedate set to 2025-11-30
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Listen normally on 3 eth0 172.31.26.5:123
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: Listen normally on 4 lo [::1]:123
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: bind(21) AF_INET6 [fe80::463:25ff:fe1c:99eb%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:17:27.357613 ntpd[1961]: 16 Dec 13:17:27 ntpd[1961]: unable to create socket on eth0 (5) for [fe80::463:25ff:fe1c:99eb%2]:123
Dec 16 13:17:27.360338 kernel: ntpd[1961]: segfault at 24 ip 0000556316c96aeb sp 00007fffe7953f60 error 4 in ntpd[68aeb,556316c34000+80000] likely on CPU 1 (core 0, socket 0)
Dec 16 13:17:27.360375 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Dec 16 13:17:27.356634 ntpd[1961]: basedate set to 2025-11-30
Dec 16 13:17:27.370798 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:17:27.356656 ntpd[1961]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:17:27.371113 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:17:27.356798 ntpd[1961]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:17:27.373490 systemd-coredump[2010]: Process 1961 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 13:17:27.356829 ntpd[1961]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:17:27.380878 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Dec 16 13:17:27.357049 ntpd[1961]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:17:27.357085 ntpd[1961]: Listen normally on 3 eth0 172.31.26.5:123
Dec 16 13:17:27.357113 ntpd[1961]: Listen normally on 4 lo [::1]:123
Dec 16 13:17:27.357142 ntpd[1961]: bind(21) AF_INET6 [fe80::463:25ff:fe1c:99eb%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:17:27.357162 ntpd[1961]: unable to create socket on eth0 (5) for [fe80::463:25ff:fe1c:99eb%2]:123
Dec 16 13:17:27.398485 jq[1987]: true
Dec 16 13:17:27.388686 systemd[1]: Started systemd-coredump@0-2010-0.service - Process Core Dump (PID 2010/UID 0).
Dec 16 13:17:27.405404 update_engine[1975]: I20251216 13:17:27.401947 1975 main.cc:92] Flatcar Update Engine starting
Dec 16 13:17:27.405738 tar[1981]: linux-amd64/LICENSE
Dec 16 13:17:27.405738 tar[1981]: linux-amd64/helm
Dec 16 13:17:27.435411 coreos-metadata[1954]: Dec 16 13:17:27.435 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.437 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.438 INFO Fetch successful
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.438 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.438 INFO Fetch successful
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.438 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.439 INFO Fetch successful
Dec 16 13:17:27.442943 coreos-metadata[1954]: Dec 16 13:17:27.439 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 16 13:17:27.445481 coreos-metadata[1954]: Dec 16 13:17:27.445 INFO Fetch successful
Dec 16 13:17:27.445481 coreos-metadata[1954]: Dec 16 13:17:27.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 16 13:17:27.445481 coreos-metadata[1954]: Dec 16 13:17:27.445 INFO Fetch failed with 404: resource not found
Dec 16 13:17:27.445481 coreos-metadata[1954]: Dec 16 13:17:27.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 16 13:17:27.445481 coreos-metadata[1954]: Dec 16 13:17:27.445 INFO Fetch successful
Dec 16 13:17:27.445481 coreos-metadata[1954]: Dec 16 13:17:27.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetch successful
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetch successful
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetch successful
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 16 13:17:27.449035 coreos-metadata[1954]: Dec 16 13:17:27.448 INFO Fetch successful
Dec 16 13:17:27.466132 dbus-daemon[1955]: [system] SELinux support is enabled
Dec 16 13:17:27.467622 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:17:27.475628 dbus-daemon[1955]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1860 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 16 13:17:27.478116 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:17:27.478167 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:17:27.479205 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:17:27.479238 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:17:27.490177 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Dec 16 13:17:27.484995 dbus-daemon[1955]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:17:27.506148 update_engine[1975]: I20251216 13:17:27.496630 1975 update_check_scheduler.cc:74] Next update check in 2m11s
Dec 16 13:17:27.508649 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 16 13:17:27.510505 extend-filesystems[1985]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 16 13:17:27.510505 extend-filesystems[1985]: old_desc_blocks = 1, new_desc_blocks = 2
Dec 16 13:17:27.510505 extend-filesystems[1985]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Dec 16 13:17:27.518989 extend-filesystems[1958]: Resized filesystem in /dev/nvme0n1p9
Dec 16 13:17:27.511259 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 16 13:17:27.515873 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 13:17:27.517072 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 13:17:27.524178 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:17:27.593230 systemd-logind[1972]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:17:27.593269 systemd-logind[1972]: Watching system buttons on /dev/input/event3 (Sleep Button)
Dec 16 13:17:27.593292 systemd-logind[1972]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:17:27.600018 systemd-logind[1972]: New seat seat0.
Dec 16 13:17:27.607076 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:17:27.622496 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:17:27.631157 bash[2043]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:17:27.641419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:17:27.695482 systemd[1]: Starting sshkeys.service...
Dec 16 13:17:27.752411 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:17:27.755294 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:17:27.772994 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 13:17:27.785430 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 13:17:27.901688 systemd-coredump[2012]: Process 1961 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1961: #0 0x0000556316c96aeb n/a (ntpd + 0x68aeb) #1 0x0000556316c3fcdf n/a (ntpd + 0x11cdf) #2 0x0000556316c40575 n/a (ntpd + 0x12575) #3 0x0000556316c3bd8a n/a (ntpd + 0xdd8a) #4 0x0000556316c3d5d3 n/a (ntpd + 0xf5d3) #5 0x0000556316c45fd1 n/a (ntpd + 0x17fd1) #6 0x0000556316c36c2d n/a (ntpd + 0x8c2d) #7 0x00007f8511b4316c n/a (libc.so.6 + 0x2716c) #8 0x00007f8511b43229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000556316c36c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Dec 16 13:17:27.938554 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 13:17:27.939131 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 13:17:27.953895 systemd[1]: systemd-coredump@0-2010-0.service: Deactivated successfully.
Dec 16 13:17:28.056112 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:17:28.069714 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:17:28.072041 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 13:17:28.089523 dbus-daemon[1955]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:17:28.105539 ntpd[2149]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: ---------------------------------------------------- Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: corporation. Support and training for ntp-4 are Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: available at https://www.nwtime.org/support Dec 16 13:17:28.105998 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: ---------------------------------------------------- Dec 16 13:17:28.105607 ntpd[2149]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:17:28.106692 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: proto: precision = 0.084 usec (-23) Dec 16 13:17:28.105620 ntpd[2149]: ---------------------------------------------------- Dec 16 13:17:28.106786 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: basedate set to 2025-11-30 Dec 16 13:17:28.106786 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: gps base set to 2025-11-30 (week 2395) Dec 16 13:17:28.105629 ntpd[2149]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:17:28.106907 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:17:28.106907 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:17:28.105640 
ntpd[2149]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:17:28.105649 ntpd[2149]: corporation. Support and training for ntp-4 are Dec 16 13:17:28.107101 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:17:28.107101 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Listen normally on 3 eth0 172.31.26.5:123 Dec 16 13:17:28.107101 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: Listen normally on 4 lo [::1]:123 Dec 16 13:17:28.105658 ntpd[2149]: available at https://www.nwtime.org/support Dec 16 13:17:28.107266 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: bind(21) AF_INET6 [fe80::463:25ff:fe1c:99eb%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:17:28.107266 ntpd[2149]: 16 Dec 13:17:28 ntpd[2149]: unable to create socket on eth0 (5) for [fe80::463:25ff:fe1c:99eb%2]:123 Dec 16 13:17:28.119557 kernel: ntpd[2149]: segfault at 24 ip 0000564f59452aeb sp 00007ffe69f41ec0 error 4 in ntpd[68aeb,564f593f0000+80000] likely on CPU 0 (core 0, socket 0) Dec 16 13:17:28.119627 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 16 13:17:28.105666 ntpd[2149]: ---------------------------------------------------- Dec 16 13:17:28.106425 ntpd[2149]: proto: precision = 0.084 usec (-23) Dec 16 13:17:28.106700 ntpd[2149]: basedate set to 2025-11-30 Dec 16 13:17:28.106715 ntpd[2149]: gps base set to 2025-11-30 (week 2395) Dec 16 13:17:28.106805 ntpd[2149]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:17:28.106833 ntpd[2149]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:17:28.107034 ntpd[2149]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:17:28.107061 ntpd[2149]: Listen normally on 3 eth0 172.31.26.5:123 Dec 16 13:17:28.107090 ntpd[2149]: Listen normally on 4 lo [::1]:123 Dec 16 13:17:28.107121 ntpd[2149]: bind(21) AF_INET6 [fe80::463:25ff:fe1c:99eb%2]:123 flags 
0x811 failed: Cannot assign requested address Dec 16 13:17:28.107142 ntpd[2149]: unable to create socket on eth0 (5) for [fe80::463:25ff:fe1c:99eb%2]:123 Dec 16 13:17:28.111152 dbus-daemon[1955]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:17:28.124536 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 13:17:28.129809 systemd-coredump[2154]: Process 2149 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 13:17:28.150630 coreos-metadata[2075]: Dec 16 13:17:28.144 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:17:28.161786 coreos-metadata[2075]: Dec 16 13:17:28.161 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 13:17:28.161848 systemd[1]: Started systemd-coredump@1-2154-0.service - Process Core Dump (PID 2154/UID 0). Dec 16 13:17:28.164652 coreos-metadata[2075]: Dec 16 13:17:28.164 INFO Fetch successful Dec 16 13:17:28.164652 coreos-metadata[2075]: Dec 16 13:17:28.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 13:17:28.167341 coreos-metadata[2075]: Dec 16 13:17:28.166 INFO Fetch successful Dec 16 13:17:28.169275 unknown[2075]: wrote ssh authorized keys file for user: core Dec 16 13:17:28.264468 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:17:28.273727 update-ssh-keys[2162]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:17:28.275000 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:17:28.278396 systemd[1]: Finished sshkeys.service. Dec 16 13:17:28.501662 systemd-coredump[2159]: Process 2149 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. 
Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2149: #0 0x0000564f59452aeb n/a (ntpd + 0x68aeb) #1 0x0000564f593fbcdf n/a (ntpd + 0x11cdf) #2 0x0000564f593fc575 n/a (ntpd + 0x12575) #3 0x0000564f593f7d8a n/a (ntpd + 0xdd8a) #4 0x0000564f593f95d3 n/a (ntpd + 0xf5d3) #5 0x0000564f59401fd1 n/a (ntpd + 0x17fd1) #6 0x0000564f593f2c2d n/a (ntpd + 0x8c2d) #7 0x00007f461e5e216c n/a (libc.so.6 + 0x2716c) #8 0x00007f461e5e2229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000564f593f2c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 16 13:17:28.504040 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 13:17:28.504235 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 13:17:28.520869 systemd[1]: systemd-coredump@1-2154-0.service: Deactivated successfully. 
Dec 16 13:17:28.544342 containerd[1997]: time="2025-12-16T13:17:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:17:28.545811 containerd[1997]: time="2025-12-16T13:17:28.545764585Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:17:28.593395 containerd[1997]: time="2025-12-16T13:17:28.589593066Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.461µs" Dec 16 13:17:28.593515 containerd[1997]: time="2025-12-16T13:17:28.593440202Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:17:28.593515 containerd[1997]: time="2025-12-16T13:17:28.593499017Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:17:28.593715 containerd[1997]: time="2025-12-16T13:17:28.593691464Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:17:28.593770 containerd[1997]: time="2025-12-16T13:17:28.593722008Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:17:28.593770 containerd[1997]: time="2025-12-16T13:17:28.593757009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.593833356Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.593852746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 
13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.594192289Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.594214490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.594230491Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.594242230Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595364 containerd[1997]: time="2025-12-16T13:17:28.595046364Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595794 containerd[1997]: time="2025-12-16T13:17:28.595765077Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595846 containerd[1997]: time="2025-12-16T13:17:28.595822359Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:17:28.595889 containerd[1997]: time="2025-12-16T13:17:28.595850388Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:17:28.595935 containerd[1997]: time="2025-12-16T13:17:28.595917892Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:17:28.596848 
containerd[1997]: time="2025-12-16T13:17:28.596821343Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:17:28.596930 containerd[1997]: time="2025-12-16T13:17:28.596911837Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:17:28.603596 containerd[1997]: time="2025-12-16T13:17:28.603512304Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:17:28.603848 containerd[1997]: time="2025-12-16T13:17:28.603810412Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604047690Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604078255Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604107818Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604125325Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604146542Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604179542Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604197755Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: 
time="2025-12-16T13:17:28.604214046Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604229045Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:17:28.604336 containerd[1997]: time="2025-12-16T13:17:28.604250112Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:17:28.605002 containerd[1997]: time="2025-12-16T13:17:28.604971813Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:17:28.605391 containerd[1997]: time="2025-12-16T13:17:28.605365877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606358846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606398380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606419648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606447603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606464986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606482002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606500850Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606520074Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606538314Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606599551Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606617316Z" level=info msg="Start snapshots syncer" Dec 16 13:17:28.606719 containerd[1997]: time="2025-12-16T13:17:28.606669919Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:17:28.610073 containerd[1997]: time="2025-12-16T13:17:28.609446697Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:17:28.610073 containerd[1997]: time="2025-12-16T13:17:28.609532089Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609624399Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609829619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609859467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609877274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609894144Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609917987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609933955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609954471Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.609994006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.610010607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:17:28.610292 containerd[1997]: time="2025-12-16T13:17:28.610027628Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.610764208Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611255083Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611279800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611299413Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611327133Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611344154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611369907Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611393706Z" level=info msg="runtime interface created" Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611402474Z" level=info msg="created NRI interface" Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611415600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611439255Z" level=info msg="Connect containerd service" Dec 16 13:17:28.611730 containerd[1997]: time="2025-12-16T13:17:28.611480663Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:17:28.613327 
polkitd[2153]: Started polkitd version 126 Dec 16 13:17:28.617407 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Dec 16 13:17:28.617759 containerd[1997]: time="2025-12-16T13:17:28.617109319Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:17:28.620176 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:17:28.638170 polkitd[2153]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:17:28.640266 polkitd[2153]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:17:28.641346 polkitd[2153]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:17:28.642348 polkitd[2153]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:17:28.644779 polkitd[2153]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:17:28.644836 polkitd[2153]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:17:28.650404 polkitd[2153]: Finished loading, compiling and executing 2 rules Dec 16 13:17:28.650752 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 16 13:17:28.680739 dbus-daemon[1955]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:17:28.682104 polkitd[2153]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:17:28.713045 ntpd[2184]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: ---------------------------------------------------- Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: corporation. Support and training for ntp-4 are Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: available at https://www.nwtime.org/support Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: ---------------------------------------------------- Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: proto: precision = 0.102 usec (-23) Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: basedate set to 2025-11-30 Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: gps base set to 2025-11-30 (week 2395) Dec 16 13:17:28.715079 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:17:28.713127 ntpd[2184]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:17:28.715535 systemd-networkd[1860]: eth0: Gained IPv6LL Dec 16 13:17:28.713138 ntpd[2184]: ---------------------------------------------------- Dec 16 13:17:28.713147 ntpd[2184]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:17:28.713157 ntpd[2184]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:17:28.713166 ntpd[2184]: corporation. Support and training for ntp-4 are Dec 16 13:17:28.713175 ntpd[2184]: available at https://www.nwtime.org/support Dec 16 13:17:28.713184 ntpd[2184]: ---------------------------------------------------- Dec 16 13:17:28.713942 ntpd[2184]: proto: precision = 0.102 usec (-23) Dec 16 13:17:28.714195 ntpd[2184]: basedate set to 2025-11-30 Dec 16 13:17:28.714207 ntpd[2184]: gps base set to 2025-11-30 (week 2395) Dec 16 13:17:28.714288 ntpd[2184]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:17:28.724686 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:17:28.726795 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:17:28.733159 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 13:17:28.737549 ntpd[2184]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:17:28.740992 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:17:28.740992 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:17:28.740992 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listen normally on 3 eth0 172.31.26.5:123 Dec 16 13:17:28.740992 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listen normally on 4 lo [::1]:123 Dec 16 13:17:28.740992 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listen normally on 5 eth0 [fe80::463:25ff:fe1c:99eb%2]:123 Dec 16 13:17:28.740992 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: Listening on routing socket on fd #22 for interface updates Dec 16 13:17:28.737781 ntpd[2184]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:17:28.737813 ntpd[2184]: Listen normally on 3 eth0 172.31.26.5:123 Dec 16 13:17:28.737844 ntpd[2184]: Listen normally on 4 lo [::1]:123 Dec 16 13:17:28.737871 ntpd[2184]: Listen normally on 5 eth0 [fe80::463:25ff:fe1c:99eb%2]:123 Dec 16 13:17:28.737896 ntpd[2184]: Listening on routing socket on fd #22 
for interface updates Dec 16 13:17:28.742095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:17:28.750731 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:17:28.760070 ntpd[2184]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:17:28.761151 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:17:28.761151 ntpd[2184]: 16 Dec 13:17:28 ntpd[2184]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:17:28.760109 ntpd[2184]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:17:28.775351 systemd-hostnamed[2032]: Hostname set to (transient) Dec 16 13:17:28.775544 systemd-resolved[1861]: System hostname changed to 'ip-172-31-26-5'. Dec 16 13:17:28.879300 tar[1981]: linux-amd64/README.md Dec 16 13:17:28.883442 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:17:28.941287 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:17:28.959365 amazon-ssm-agent[2196]: Initializing new seelog logger Dec 16 13:17:28.961823 amazon-ssm-agent[2196]: New Seelog Logger Creation Complete Dec 16 13:17:28.962057 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.962213 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.963480 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 processing appconfig overrides Dec 16 13:17:28.969338 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.969338 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.969338 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 processing appconfig overrides Dec 16 13:17:28.969672 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 13:17:28.969672 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.969777 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 processing appconfig overrides Dec 16 13:17:28.970282 amazon-ssm-agent[2196]: 2025-12-16 13:17:28.9664 INFO Proxy environment variables: Dec 16 13:17:28.978647 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.978776 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:17:28.978983 amazon-ssm-agent[2196]: 2025/12/16 13:17:28 processing appconfig overrides Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.981846677Z" level=info msg="Start subscribing containerd event" Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.981915253Z" level=info msg="Start recovering state" Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982036344Z" level=info msg="Start event monitor" Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982050860Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982062032Z" level=info msg="Start streaming server" Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982074642Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982085994Z" level=info msg="runtime interface starting up..." Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982095755Z" level=info msg="starting plugins..." Dec 16 13:17:28.982566 containerd[1997]: time="2025-12-16T13:17:28.982112653Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:17:28.984654 containerd[1997]: time="2025-12-16T13:17:28.983264042Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 16 13:17:28.984654 containerd[1997]: time="2025-12-16T13:17:28.983367169Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:17:28.983586 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:17:28.985074 containerd[1997]: time="2025-12-16T13:17:28.984960032Z" level=info msg="containerd successfully booted in 0.443158s" Dec 16 13:17:29.074396 amazon-ssm-agent[2196]: 2025-12-16 13:17:28.9665 INFO https_proxy: Dec 16 13:17:29.171405 amazon-ssm-agent[2196]: 2025-12-16 13:17:28.9665 INFO http_proxy: Dec 16 13:17:29.270032 amazon-ssm-agent[2196]: 2025-12-16 13:17:28.9665 INFO no_proxy: Dec 16 13:17:29.350212 sshd_keygen[2009]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:17:29.370342 amazon-ssm-agent[2196]: 2025-12-16 13:17:28.9692 INFO Checking if agent identity type OnPrem can be assumed Dec 16 13:17:29.386223 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:17:29.389617 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:17:29.416415 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:17:29.416724 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:17:29.422656 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:17:29.454235 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:17:29.458838 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:17:29.465718 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:17:29.466652 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 13:17:29.472342 amazon-ssm-agent[2196]: 2025-12-16 13:17:28.9695 INFO Checking if agent identity type EC2 can be assumed
Dec 16 13:17:29.570390 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0523 INFO Agent will take identity from EC2
Dec 16 13:17:29.669847 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0542 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Dec 16 13:17:29.737127 amazon-ssm-agent[2196]: 2025/12/16 13:17:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:17:29.738381 amazon-ssm-agent[2196]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:17:29.738588 amazon-ssm-agent[2196]: 2025/12/16 13:17:29 processing appconfig overrides
Dec 16 13:17:29.769364 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0542 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 16 13:17:29.782461 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0542 INFO [amazon-ssm-agent] Starting Core Agent
Dec 16 13:17:29.782654 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0542 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Dec 16 13:17:29.782654 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0542 INFO [Registrar] Starting registrar module
Dec 16 13:17:29.782654 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0587 INFO [EC2Identity] Checking disk for registration info
Dec 16 13:17:29.782654 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0588 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.0588 INFO [EC2Identity] Generating registration keypair
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.6780 INFO [EC2Identity] Checking write access before registering
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.6785 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7366 INFO [EC2Identity] EC2 registration was successful.
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7369 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7370 INFO [CredentialRefresher] credentialRefresher has started
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7370 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7805 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 16 13:17:29.782821 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7823 INFO [CredentialRefresher] Credentials ready
Dec 16 13:17:29.868028 amazon-ssm-agent[2196]: 2025-12-16 13:17:29.7827 INFO [CredentialRefresher] Next credential rotation will be in 29.999963312416668 minutes
Dec 16 13:17:30.344375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:17:30.345499 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:17:30.347638 systemd[1]: Startup finished in 2.645s (kernel) + 6.438s (initrd) + 7.330s (userspace) = 16.414s.
Dec 16 13:17:30.351136 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:17:30.688845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:17:30.691540 systemd[1]: Started sshd@0-172.31.26.5:22-139.178.68.195:55320.service - OpenSSH per-connection server daemon (139.178.68.195:55320).
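Editor's note: the odd-looking "29.999963312416668 minutes" in the CredentialRefresher record is simply the agent's roughly 30-minute rotation interval with the few milliseconds already elapsed subtracted. A quick arithmetic check, using only the number from the log line above:

```shell
# Convert the logged rotation interval from minutes to seconds.
# 29.999963312416668 min is ~1800 s, i.e. a ~30-minute rotation cadence.
awk 'BEGIN { printf "%.0f\n", 29.999963312416668 * 60 }'
```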
Dec 16 13:17:30.799739 amazon-ssm-agent[2196]: 2025-12-16 13:17:30.7996 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 16 13:17:30.900983 amazon-ssm-agent[2196]: 2025-12-16 13:17:30.8037 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2263) started
Dec 16 13:17:30.925282 sshd[2258]: Accepted publickey for core from 139.178.68.195 port 55320 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:30.931147 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:30.951016 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 13:17:30.953766 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 13:17:30.963130 systemd-logind[1972]: New session 1 of user core.
Dec 16 13:17:30.995825 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 13:17:31.001661 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:17:31.003328 amazon-ssm-agent[2196]: 2025-12-16 13:17:30.8037 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 16 13:17:31.021407 (systemd)[2273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:17:31.029054 systemd-logind[1972]: New session c1 of user core.
Dec 16 13:17:31.217721 kubelet[2248]: E1216 13:17:31.217622 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:17:31.221713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:17:31.222211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:17:31.223038 systemd[1]: kubelet.service: Consumed 1.118s CPU time, 264.4M memory peak.
Dec 16 13:17:31.242534 systemd[2273]: Queued start job for default target default.target.
Dec 16 13:17:31.249745 systemd[2273]: Created slice app.slice - User Application Slice.
Dec 16 13:17:31.249790 systemd[2273]: Reached target paths.target - Paths.
Dec 16 13:17:31.250100 systemd[2273]: Reached target timers.target - Timers.
Dec 16 13:17:31.251924 systemd[2273]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:17:31.265250 systemd[2273]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:17:31.265415 systemd[2273]: Reached target sockets.target - Sockets.
Dec 16 13:17:31.265481 systemd[2273]: Reached target basic.target - Basic System.
Dec 16 13:17:31.265535 systemd[2273]: Reached target default.target - Main User Target.
Dec 16 13:17:31.265578 systemd[2273]: Startup finished in 222ms.
Dec 16 13:17:31.265696 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:17:31.273632 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:17:31.424071 systemd[1]: Started sshd@1-172.31.26.5:22-139.178.68.195:55324.service - OpenSSH per-connection server daemon (139.178.68.195:55324).
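Editor's note: the kubelet exit above is the usual pre-bootstrap state, kubelet.service starts before anything has written /var/lib/kubelet/config.yaml (normally kubeadm generates it during init/join), so the unit fails and is left to systemd's restart logic. As an illustration only (written to a scratch path, not the real /var/lib/kubelet; the exact fields kubeadm emits differ per cluster), a minimal KubeletConfiguration of the shape that file takes:

```shell
# Sketch: a minimal KubeletConfiguration like the one kubeadm later drops
# at /var/lib/kubelet/config.yaml. Uses /tmp so the sketch is runnable
# anywhere; on a real node kubeadm generates this file during init/join.
cfg=/tmp/kubelet-config-demo.yaml
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches cgroupDriver="systemd" seen later in this log
staticPodPath: /etc/kubernetes/manifests
EOF
grep -c '^kind: KubeletConfiguration$' "$cfg"
```

Once the real file exists, the scheduled kubelet restarts seen later in the log can get past this load step.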
Dec 16 13:17:31.594194 sshd[2290]: Accepted publickey for core from 139.178.68.195 port 55324 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:31.595908 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:31.601389 systemd-logind[1972]: New session 2 of user core.
Dec 16 13:17:31.608587 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:17:31.730808 sshd[2293]: Connection closed by 139.178.68.195 port 55324
Dec 16 13:17:31.731551 sshd-session[2290]: pam_unix(sshd:session): session closed for user core
Dec 16 13:17:31.735092 systemd[1]: sshd@1-172.31.26.5:22-139.178.68.195:55324.service: Deactivated successfully.
Dec 16 13:17:31.737151 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:17:31.739201 systemd-logind[1972]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:17:31.740550 systemd-logind[1972]: Removed session 2.
Dec 16 13:17:31.767195 systemd[1]: Started sshd@2-172.31.26.5:22-139.178.68.195:55332.service - OpenSSH per-connection server daemon (139.178.68.195:55332).
Dec 16 13:17:31.931821 sshd[2299]: Accepted publickey for core from 139.178.68.195 port 55332 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:31.933043 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:31.938748 systemd-logind[1972]: New session 3 of user core.
Dec 16 13:17:31.945555 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:17:32.060512 sshd[2302]: Connection closed by 139.178.68.195 port 55332
Dec 16 13:17:32.061034 sshd-session[2299]: pam_unix(sshd:session): session closed for user core
Dec 16 13:17:32.065875 systemd[1]: sshd@2-172.31.26.5:22-139.178.68.195:55332.service: Deactivated successfully.
Dec 16 13:17:32.068083 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:17:32.069246 systemd-logind[1972]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:17:32.071256 systemd-logind[1972]: Removed session 3.
Dec 16 13:17:32.095902 systemd[1]: Started sshd@3-172.31.26.5:22-139.178.68.195:55340.service - OpenSSH per-connection server daemon (139.178.68.195:55340).
Dec 16 13:17:32.267624 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 55340 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:32.268840 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:32.273934 systemd-logind[1972]: New session 4 of user core.
Dec 16 13:17:32.281558 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:17:32.403197 sshd[2311]: Connection closed by 139.178.68.195 port 55340
Dec 16 13:17:32.403755 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Dec 16 13:17:32.408033 systemd[1]: sshd@3-172.31.26.5:22-139.178.68.195:55340.service: Deactivated successfully.
Dec 16 13:17:32.410245 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:17:32.411480 systemd-logind[1972]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:17:32.413078 systemd-logind[1972]: Removed session 4.
Dec 16 13:17:32.437382 systemd[1]: Started sshd@4-172.31.26.5:22-139.178.68.195:55354.service - OpenSSH per-connection server daemon (139.178.68.195:55354).
Dec 16 13:17:32.615371 sshd[2317]: Accepted publickey for core from 139.178.68.195 port 55354 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:32.616965 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:32.621942 systemd-logind[1972]: New session 5 of user core.
Dec 16 13:17:32.633561 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:17:32.766690 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:17:32.767164 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:17:32.783679 sudo[2321]: pam_unix(sudo:session): session closed for user root
Dec 16 13:17:32.807298 sshd[2320]: Connection closed by 139.178.68.195 port 55354
Dec 16 13:17:32.808128 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Dec 16 13:17:32.813192 systemd[1]: sshd@4-172.31.26.5:22-139.178.68.195:55354.service: Deactivated successfully.
Dec 16 13:17:32.817073 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:17:32.819825 systemd-logind[1972]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:17:32.821064 systemd-logind[1972]: Removed session 5.
Dec 16 13:17:32.845279 systemd[1]: Started sshd@5-172.31.26.5:22-139.178.68.195:55362.service - OpenSSH per-connection server daemon (139.178.68.195:55362).
Dec 16 13:17:33.025778 sshd[2327]: Accepted publickey for core from 139.178.68.195 port 55362 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:33.027568 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:33.033083 systemd-logind[1972]: New session 6 of user core.
Dec 16 13:17:33.038545 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:17:33.141207 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:17:33.141667 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:17:33.148259 sudo[2332]: pam_unix(sudo:session): session closed for user root
Dec 16 13:17:33.154844 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:17:33.155505 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:17:33.166832 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:17:33.209041 augenrules[2354]: No rules
Dec 16 13:17:33.210490 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:17:33.210764 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:17:33.212131 sudo[2331]: pam_unix(sudo:session): session closed for user root
Dec 16 13:17:33.236273 sshd[2330]: Connection closed by 139.178.68.195 port 55362
Dec 16 13:17:33.236789 sshd-session[2327]: pam_unix(sshd:session): session closed for user core
Dec 16 13:17:33.241727 systemd[1]: sshd@5-172.31.26.5:22-139.178.68.195:55362.service: Deactivated successfully.
Dec 16 13:17:33.243913 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:17:33.244893 systemd-logind[1972]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:17:33.246690 systemd-logind[1972]: Removed session 6.
Dec 16 13:17:33.270221 systemd[1]: Started sshd@6-172.31.26.5:22-139.178.68.195:55378.service - OpenSSH per-connection server daemon (139.178.68.195:55378).
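Editor's note: the `augenrules[2354]: No rules` record is the expected outcome of the two sudo commands before it. augenrules roughly merges `/etc/audit/rules.d/*.rules` into `/etc/audit/audit.rules` and loads the result; with both rules.d files removed, the merged set is empty. A runnable sketch of that merge step, against a scratch directory rather than the real /etc/audit:

```shell
# Sketch of the augenrules merge step (scratch directory, illustration only).
# With no *.rules files left, as after the `rm -rf` above, the merged rule
# set comes out empty and the loader reports "No rules".
d=/tmp/audit-demo
mkdir -p "$d/rules.d"
cat "$d"/rules.d/*.rules > "$d/audit.rules" 2>/dev/null || true
[ -s "$d/audit.rules" ] || echo "No rules"
```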
Dec 16 13:17:33.439331 sshd[2363]: Accepted publickey for core from 139.178.68.195 port 55378 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:17:33.440637 sshd-session[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:17:33.447205 systemd-logind[1972]: New session 7 of user core.
Dec 16 13:17:33.455532 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:17:33.550776 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:17:33.551084 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:17:34.160102 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:17:34.173829 (dockerd)[2386]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:17:34.716735 dockerd[2386]: time="2025-12-16T13:17:34.716671830Z" level=info msg="Starting up"
Dec 16 13:17:34.718183 dockerd[2386]: time="2025-12-16T13:17:34.718150261Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:17:34.730435 dockerd[2386]: time="2025-12-16T13:17:34.730365042Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:17:34.869607 dockerd[2386]: time="2025-12-16T13:17:34.869422341Z" level=info msg="Loading containers: start."
Dec 16 13:17:34.894351 kernel: Initializing XFRM netlink socket
Dec 16 13:17:35.129369 (udev-worker)[2407]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:17:35.180606 systemd-networkd[1860]: docker0: Link UP
Dec 16 13:17:35.191057 dockerd[2386]: time="2025-12-16T13:17:35.190985372Z" level=info msg="Loading containers: done."
Dec 16 13:17:35.218505 dockerd[2386]: time="2025-12-16T13:17:35.218394036Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:17:35.218689 dockerd[2386]: time="2025-12-16T13:17:35.218551677Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:17:35.218689 dockerd[2386]: time="2025-12-16T13:17:35.218659030Z" level=info msg="Initializing buildkit"
Dec 16 13:17:35.259112 dockerd[2386]: time="2025-12-16T13:17:35.259057400Z" level=info msg="Completed buildkit initialization"
Dec 16 13:17:35.267310 dockerd[2386]: time="2025-12-16T13:17:35.267252980Z" level=info msg="Daemon has completed initialization"
Dec 16 13:17:35.267508 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:17:35.268078 dockerd[2386]: time="2025-12-16T13:17:35.267492438Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:17:36.632013 systemd-resolved[1861]: Clock change detected. Flushing caches.
Dec 16 13:17:37.184482 containerd[1997]: time="2025-12-16T13:17:37.184431017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 16 13:17:37.736803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2999669603.mount: Deactivated successfully.
Dec 16 13:17:39.302969 containerd[1997]: time="2025-12-16T13:17:39.302916454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:39.304081 containerd[1997]: time="2025-12-16T13:17:39.304028322Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183"
Dec 16 13:17:39.306115 containerd[1997]: time="2025-12-16T13:17:39.305468924Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:39.307975 containerd[1997]: time="2025-12-16T13:17:39.307927135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:39.309348 containerd[1997]: time="2025-12-16T13:17:39.309107065Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.12463329s"
Dec 16 13:17:39.309348 containerd[1997]: time="2025-12-16T13:17:39.309154354Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 16 13:17:39.309727 containerd[1997]: time="2025-12-16T13:17:39.309700407Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 16 13:17:40.934025 containerd[1997]: time="2025-12-16T13:17:40.933964432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:40.936287 containerd[1997]: time="2025-12-16T13:17:40.936081120Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010"
Dec 16 13:17:40.938540 containerd[1997]: time="2025-12-16T13:17:40.938503375Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:40.942401 containerd[1997]: time="2025-12-16T13:17:40.942364082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:40.943675 containerd[1997]: time="2025-12-16T13:17:40.943038611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.633303965s"
Dec 16 13:17:40.943675 containerd[1997]: time="2025-12-16T13:17:40.943089717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 16 13:17:40.943936 containerd[1997]: time="2025-12-16T13:17:40.943899664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 16 13:17:42.353114 containerd[1997]: time="2025-12-16T13:17:42.353022451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:42.355300 containerd[1997]: time="2025-12-16T13:17:42.355239721Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248"
Dec 16 13:17:42.358174 containerd[1997]: time="2025-12-16T13:17:42.358097892Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:42.362159 containerd[1997]: time="2025-12-16T13:17:42.362089393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:42.363407 containerd[1997]: time="2025-12-16T13:17:42.363221278Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.419276634s"
Dec 16 13:17:42.363407 containerd[1997]: time="2025-12-16T13:17:42.363272285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 16 13:17:42.363917 containerd[1997]: time="2025-12-16T13:17:42.363818256Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 16 13:17:42.390308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:17:42.392208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:17:42.694908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
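Editor's note: each containerd "Pulled image … in …s" record pairs a compressed image size with a wall-clock duration, so the effective pull throughput can be read straight off the log. For the kube-apiserver pull above (29068782 bytes in 2.12463329s; both numbers copied from the log record):

```shell
# Back-of-envelope throughput for the kube-apiserver pull logged above.
# size (bytes) and duration (seconds) are taken from the "Pulled image" record.
awk 'BEGIN { printf "%.1f MiB/s\n", 29068782 / 2.12463329 / (1024*1024) }'
```

The same arithmetic applies to the other pulls; note the pause image (320 KiB in ~0.5 s) is dominated by registry round-trip latency, not bandwidth.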
Dec 16 13:17:42.709602 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:17:42.765419 kubelet[2671]: E1216 13:17:42.765257 2671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:17:42.769873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:17:42.770083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:17:42.770660 systemd[1]: kubelet.service: Consumed 185ms CPU time, 109.7M memory peak.
Dec 16 13:17:43.487541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439085251.mount: Deactivated successfully.
Dec 16 13:17:44.082198 containerd[1997]: time="2025-12-16T13:17:44.082151719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:44.084419 containerd[1997]: time="2025-12-16T13:17:44.084381700Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423"
Dec 16 13:17:44.086911 containerd[1997]: time="2025-12-16T13:17:44.086829934Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:44.096131 containerd[1997]: time="2025-12-16T13:17:44.096036918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:44.099362 containerd[1997]: time="2025-12-16T13:17:44.099309030Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.735434458s"
Dec 16 13:17:44.099362 containerd[1997]: time="2025-12-16T13:17:44.099364315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 16 13:17:44.099955 containerd[1997]: time="2025-12-16T13:17:44.099926107Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 16 13:17:44.610364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825263961.mount: Deactivated successfully.
Dec 16 13:17:45.709207 containerd[1997]: time="2025-12-16T13:17:45.709154487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:45.711143 containerd[1997]: time="2025-12-16T13:17:45.711091056Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Dec 16 13:17:45.713569 containerd[1997]: time="2025-12-16T13:17:45.713488861Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:45.717651 containerd[1997]: time="2025-12-16T13:17:45.717496468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:45.718412 containerd[1997]: time="2025-12-16T13:17:45.718375940Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.6184112s"
Dec 16 13:17:45.718412 containerd[1997]: time="2025-12-16T13:17:45.718413733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 16 13:17:45.718979 containerd[1997]: time="2025-12-16T13:17:45.718953282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 13:17:46.188483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608579783.mount: Deactivated successfully.
Dec 16 13:17:46.202573 containerd[1997]: time="2025-12-16T13:17:46.202510244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:17:46.204766 containerd[1997]: time="2025-12-16T13:17:46.204443622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 16 13:17:46.207106 containerd[1997]: time="2025-12-16T13:17:46.207034487Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:17:46.211692 containerd[1997]: time="2025-12-16T13:17:46.211550860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:17:46.213251 containerd[1997]: time="2025-12-16T13:17:46.212088680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 493.105851ms"
Dec 16 13:17:46.213251 containerd[1997]: time="2025-12-16T13:17:46.212181683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 16 13:17:46.213581 containerd[1997]: time="2025-12-16T13:17:46.213555615Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 16 13:17:46.771980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198822518.mount: Deactivated successfully.
Dec 16 13:17:49.205026 containerd[1997]: time="2025-12-16T13:17:49.204964465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:49.206922 containerd[1997]: time="2025-12-16T13:17:49.206874066Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Dec 16 13:17:49.209602 containerd[1997]: time="2025-12-16T13:17:49.209543092Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:49.214718 containerd[1997]: time="2025-12-16T13:17:49.213336027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:17:49.214718 containerd[1997]: time="2025-12-16T13:17:49.214540366Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.000948937s"
Dec 16 13:17:49.214718 containerd[1997]: time="2025-12-16T13:17:49.214583265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Dec 16 13:17:52.006377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:17:52.006639 systemd[1]: kubelet.service: Consumed 185ms CPU time, 109.7M memory peak.
Dec 16 13:17:52.009495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:17:52.048331 systemd[1]: Reload requested from client PID 2824 ('systemctl') (unit session-7.scope)...
Dec 16 13:17:52.048353 systemd[1]: Reloading...
Dec 16 13:17:52.179089 zram_generator::config[2878]: No configuration found.
Dec 16 13:17:52.456709 systemd[1]: Reloading finished in 407 ms.
Dec 16 13:17:52.520766 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:17:52.520858 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:17:52.521405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:17:52.521453 systemd[1]: kubelet.service: Consumed 145ms CPU time, 97.8M memory peak.
Dec 16 13:17:52.524459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:17:52.760747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:17:52.774524 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:17:52.825466 kubelet[2932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:17:52.825466 kubelet[2932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:17:52.825466 kubelet[2932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:17:52.825918 kubelet[2932]: I1216 13:17:52.825554 2932 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:17:53.469016 kubelet[2932]: I1216 13:17:53.468954 2932 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 13:17:53.469016 kubelet[2932]: I1216 13:17:53.468995 2932 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:17:53.469430 kubelet[2932]: I1216 13:17:53.469401 2932 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 13:17:53.513100 kubelet[2932]: I1216 13:17:53.512794 2932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:17:53.526134 kubelet[2932]: E1216 13:17:53.525835 2932 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:17:53.532918 kubelet[2932]: I1216 13:17:53.532881 2932 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:17:53.539232 kubelet[2932]: I1216 13:17:53.539173 2932 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:17:53.544255 kubelet[2932]: I1216 13:17:53.543670 2932 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:17:53.544255 kubelet[2932]: I1216 13:17:53.543907 2932 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:17:53.546437
kubelet[2932]: I1216 13:17:53.546395 2932 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:17:53.546437 kubelet[2932]: I1216 13:17:53.546430 2932 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:17:53.547926 kubelet[2932]: I1216 13:17:53.547874 2932 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:17:53.553290 kubelet[2932]: I1216 13:17:53.553118 2932 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:17:53.553290 kubelet[2932]: I1216 13:17:53.553163 2932 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:17:53.553290 kubelet[2932]: I1216 13:17:53.553191 2932 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:17:53.553290 kubelet[2932]: I1216 13:17:53.553202 2932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:17:53.563659 kubelet[2932]: W1216 13:17:53.563310 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-5&limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused Dec 16 13:17:53.563659 kubelet[2932]: E1216 13:17:53.563383 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-5&limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:53.565204 kubelet[2932]: I1216 13:17:53.565166 2932 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:17:53.568165 kubelet[2932]: W1216 13:17:53.568119 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.26.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused Dec 16 13:17:53.568358 kubelet[2932]: E1216 13:17:53.568301 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:53.569390 kubelet[2932]: I1216 13:17:53.569277 2932 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:17:53.570119 kubelet[2932]: W1216 13:17:53.570096 2932 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:17:53.571129 kubelet[2932]: I1216 13:17:53.571104 2932 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:17:53.571217 kubelet[2932]: I1216 13:17:53.571143 2932 server.go:1287] "Started kubelet" Dec 16 13:17:53.571356 kubelet[2932]: I1216 13:17:53.571293 2932 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:17:53.573255 kubelet[2932]: I1216 13:17:53.573181 2932 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:17:53.575477 kubelet[2932]: I1216 13:17:53.575415 2932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:17:53.575896 kubelet[2932]: I1216 13:17:53.575662 2932 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:17:53.576048 kubelet[2932]: I1216 13:17:53.576012 2932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:17:53.583686 kubelet[2932]: I1216 13:17:53.583281 2932 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:17:53.584823 kubelet[2932]: I1216 13:17:53.584741 2932 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:17:53.584986 kubelet[2932]: E1216 13:17:53.582976 2932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.5:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-5.1881b493483d5bed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-5,UID:ip-172-31-26-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-5,},FirstTimestamp:2025-12-16 13:17:53.571118061 +0000 UTC m=+0.792888423,LastTimestamp:2025-12-16 13:17:53.571118061 +0000 UTC m=+0.792888423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-5,}" Dec 16 13:17:53.586129 kubelet[2932]: E1216 13:17:53.584976 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-5\" not found" Dec 16 13:17:53.586129 kubelet[2932]: E1216 13:17:53.585997 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-5?timeout=10s\": dial tcp 172.31.26.5:6443: connect: connection refused" interval="200ms" Dec 16 13:17:53.587774 kubelet[2932]: I1216 13:17:53.587753 2932 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:17:53.587860 kubelet[2932]: I1216 13:17:53.587805 2932 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:17:53.598752 kubelet[2932]: W1216 13:17:53.598707 2932 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused Dec 16 13:17:53.601090 kubelet[2932]: E1216 13:17:53.599208 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:53.601090 kubelet[2932]: I1216 13:17:53.599329 2932 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:17:53.601090 kubelet[2932]: I1216 13:17:53.599337 2932 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:17:53.601090 kubelet[2932]: I1216 13:17:53.599410 2932 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:17:53.603316 kubelet[2932]: I1216 13:17:53.603279 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:17:53.604533 kubelet[2932]: I1216 13:17:53.604499 2932 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:17:53.604533 kubelet[2932]: I1216 13:17:53.604534 2932 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:17:53.604639 kubelet[2932]: I1216 13:17:53.604556 2932 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 13:17:53.604639 kubelet[2932]: I1216 13:17:53.604563 2932 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:17:53.604639 kubelet[2932]: E1216 13:17:53.604609 2932 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:17:53.612935 kubelet[2932]: W1216 13:17:53.612891 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused Dec 16 13:17:53.613125 kubelet[2932]: E1216 13:17:53.613109 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:53.621760 kubelet[2932]: E1216 13:17:53.621712 2932 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:17:53.630886 kubelet[2932]: I1216 13:17:53.630851 2932 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:17:53.630886 kubelet[2932]: I1216 13:17:53.630874 2932 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:17:53.630886 kubelet[2932]: I1216 13:17:53.630894 2932 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:17:53.634705 kubelet[2932]: I1216 13:17:53.634656 2932 policy_none.go:49] "None policy: Start" Dec 16 13:17:53.634705 kubelet[2932]: I1216 13:17:53.634686 2932 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:17:53.634705 kubelet[2932]: I1216 13:17:53.634698 2932 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:17:53.641421 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:17:53.661955 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:17:53.665940 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:17:53.674172 kubelet[2932]: I1216 13:17:53.674141 2932 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:17:53.674396 kubelet[2932]: I1216 13:17:53.674380 2932 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:17:53.674959 kubelet[2932]: I1216 13:17:53.674401 2932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:17:53.674959 kubelet[2932]: I1216 13:17:53.674946 2932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:17:53.677266 kubelet[2932]: E1216 13:17:53.677240 2932 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:17:53.677438 kubelet[2932]: E1216 13:17:53.677293 2932 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-5\" not found" Dec 16 13:17:53.716939 systemd[1]: Created slice kubepods-burstable-pod9e1b5d91b7ec810355cb8af2b385422f.slice - libcontainer container kubepods-burstable-pod9e1b5d91b7ec810355cb8af2b385422f.slice. Dec 16 13:17:53.727243 kubelet[2932]: E1216 13:17:53.725143 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5" Dec 16 13:17:53.729999 systemd[1]: Created slice kubepods-burstable-pod4dec78b87066f3a17da7b842b875b2c9.slice - libcontainer container kubepods-burstable-pod4dec78b87066f3a17da7b842b875b2c9.slice. Dec 16 13:17:53.732954 kubelet[2932]: E1216 13:17:53.732720 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5" Dec 16 13:17:53.735468 systemd[1]: Created slice kubepods-burstable-pod3488d11dd50c593dc67bfab873570bcc.slice - libcontainer container kubepods-burstable-pod3488d11dd50c593dc67bfab873570bcc.slice. 
Dec 16 13:17:53.737842 kubelet[2932]: E1216 13:17:53.737816 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5" Dec 16 13:17:53.777089 kubelet[2932]: I1216 13:17:53.776651 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-5" Dec 16 13:17:53.777089 kubelet[2932]: E1216 13:17:53.777045 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.5:6443/api/v1/nodes\": dial tcp 172.31.26.5:6443: connect: connection refused" node="ip-172-31-26-5" Dec 16 13:17:53.786699 kubelet[2932]: E1216 13:17:53.786651 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-5?timeout=10s\": dial tcp 172.31.26.5:6443: connect: connection refused" interval="400ms" Dec 16 13:17:53.889546 kubelet[2932]: I1216 13:17:53.889489 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5" Dec 16 13:17:53.889546 kubelet[2932]: I1216 13:17:53.889544 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3488d11dd50c593dc67bfab873570bcc-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-5\" (UID: \"3488d11dd50c593dc67bfab873570bcc\") " pod="kube-system/kube-scheduler-ip-172-31-26-5" Dec 16 13:17:53.890083 kubelet[2932]: I1216 13:17:53.889568 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9e1b5d91b7ec810355cb8af2b385422f-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-5\" (UID: \"9e1b5d91b7ec810355cb8af2b385422f\") " pod="kube-system/kube-apiserver-ip-172-31-26-5" Dec 16 13:17:53.890083 kubelet[2932]: I1216 13:17:53.889590 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5" Dec 16 13:17:53.890083 kubelet[2932]: I1216 13:17:53.889612 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5" Dec 16 13:17:53.890083 kubelet[2932]: I1216 13:17:53.889634 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5" Dec 16 13:17:53.890083 kubelet[2932]: I1216 13:17:53.889658 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5" Dec 16 13:17:53.890227 kubelet[2932]: I1216 13:17:53.889683 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e1b5d91b7ec810355cb8af2b385422f-ca-certs\") pod \"kube-apiserver-ip-172-31-26-5\" (UID: \"9e1b5d91b7ec810355cb8af2b385422f\") " pod="kube-system/kube-apiserver-ip-172-31-26-5" Dec 16 13:17:53.890227 kubelet[2932]: I1216 13:17:53.889713 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e1b5d91b7ec810355cb8af2b385422f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-5\" (UID: \"9e1b5d91b7ec810355cb8af2b385422f\") " pod="kube-system/kube-apiserver-ip-172-31-26-5" Dec 16 13:17:53.980016 kubelet[2932]: I1216 13:17:53.979896 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-5" Dec 16 13:17:53.981156 kubelet[2932]: E1216 13:17:53.980388 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.5:6443/api/v1/nodes\": dial tcp 172.31.26.5:6443: connect: connection refused" node="ip-172-31-26-5" Dec 16 13:17:54.035426 containerd[1997]: time="2025-12-16T13:17:54.035316369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-5,Uid:9e1b5d91b7ec810355cb8af2b385422f,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:54.036760 containerd[1997]: time="2025-12-16T13:17:54.035316584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-5,Uid:4dec78b87066f3a17da7b842b875b2c9,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:54.039605 containerd[1997]: time="2025-12-16T13:17:54.039568867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-5,Uid:3488d11dd50c593dc67bfab873570bcc,Namespace:kube-system,Attempt:0,}" Dec 16 13:17:54.186360 containerd[1997]: time="2025-12-16T13:17:54.186267482Z" level=info msg="connecting to shim b3f4c803d1fe972956d7d7841f758990bd6e1a2a3e99dd55445178e628e1cdfa" 
address="unix:///run/containerd/s/116f1cc962175d99bbb50ab906538c850725d7af7f5e708288d180433877609f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:54.187901 kubelet[2932]: E1216 13:17:54.187808 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-5?timeout=10s\": dial tcp 172.31.26.5:6443: connect: connection refused" interval="800ms" Dec 16 13:17:54.196118 containerd[1997]: time="2025-12-16T13:17:54.196010921Z" level=info msg="connecting to shim 1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3" address="unix:///run/containerd/s/c48623898b99735315fa7c2126345cac4fb62742ba674ba2801bf263032d7fa8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:54.206038 containerd[1997]: time="2025-12-16T13:17:54.203872440Z" level=info msg="connecting to shim dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff" address="unix:///run/containerd/s/d6a2e17957bc7d25295886fe7931416d7f619b47e1020f42cad61a0fe774beac" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:17:54.338438 systemd[1]: Started cri-containerd-b3f4c803d1fe972956d7d7841f758990bd6e1a2a3e99dd55445178e628e1cdfa.scope - libcontainer container b3f4c803d1fe972956d7d7841f758990bd6e1a2a3e99dd55445178e628e1cdfa. Dec 16 13:17:54.351414 systemd[1]: Started cri-containerd-1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3.scope - libcontainer container 1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3. Dec 16 13:17:54.355593 systemd[1]: Started cri-containerd-dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff.scope - libcontainer container dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff. 
Dec 16 13:17:54.388211 kubelet[2932]: I1216 13:17:54.388180 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-5" Dec 16 13:17:54.392848 kubelet[2932]: E1216 13:17:54.392794 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.5:6443/api/v1/nodes\": dial tcp 172.31.26.5:6443: connect: connection refused" node="ip-172-31-26-5" Dec 16 13:17:54.476841 containerd[1997]: time="2025-12-16T13:17:54.476780461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-5,Uid:9e1b5d91b7ec810355cb8af2b385422f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3f4c803d1fe972956d7d7841f758990bd6e1a2a3e99dd55445178e628e1cdfa\"" Dec 16 13:17:54.483460 containerd[1997]: time="2025-12-16T13:17:54.483387519Z" level=info msg="CreateContainer within sandbox \"b3f4c803d1fe972956d7d7841f758990bd6e1a2a3e99dd55445178e628e1cdfa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:17:54.486105 kubelet[2932]: W1216 13:17:54.484482 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused Dec 16 13:17:54.486904 kubelet[2932]: E1216 13:17:54.486865 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:17:54.495929 containerd[1997]: time="2025-12-16T13:17:54.495883508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-5,Uid:4dec78b87066f3a17da7b842b875b2c9,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3\"" Dec 16 13:17:54.509678 containerd[1997]: time="2025-12-16T13:17:54.509284199Z" level=info msg="CreateContainer within sandbox \"1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:17:54.513599 containerd[1997]: time="2025-12-16T13:17:54.513566457Z" level=info msg="Container 3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:54.533960 containerd[1997]: time="2025-12-16T13:17:54.533922646Z" level=info msg="CreateContainer within sandbox \"b3f4c803d1fe972956d7d7841f758990bd6e1a2a3e99dd55445178e628e1cdfa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f\"" Dec 16 13:17:54.535044 containerd[1997]: time="2025-12-16T13:17:54.534745573Z" level=info msg="StartContainer for \"3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f\"" Dec 16 13:17:54.537009 containerd[1997]: time="2025-12-16T13:17:54.536977913Z" level=info msg="Container 8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:54.537158 containerd[1997]: time="2025-12-16T13:17:54.537138002Z" level=info msg="connecting to shim 3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f" address="unix:///run/containerd/s/116f1cc962175d99bbb50ab906538c850725d7af7f5e708288d180433877609f" protocol=ttrpc version=3 Dec 16 13:17:54.538866 containerd[1997]: time="2025-12-16T13:17:54.538839184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-5,Uid:3488d11dd50c593dc67bfab873570bcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff\"" Dec 16 13:17:54.541774 containerd[1997]: 
time="2025-12-16T13:17:54.541738884Z" level=info msg="CreateContainer within sandbox \"dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:17:54.552685 containerd[1997]: time="2025-12-16T13:17:54.552611909Z" level=info msg="CreateContainer within sandbox \"1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf\"" Dec 16 13:17:54.554084 containerd[1997]: time="2025-12-16T13:17:54.553338620Z" level=info msg="StartContainer for \"8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf\"" Dec 16 13:17:54.554409 containerd[1997]: time="2025-12-16T13:17:54.554388464Z" level=info msg="connecting to shim 8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf" address="unix:///run/containerd/s/c48623898b99735315fa7c2126345cac4fb62742ba674ba2801bf263032d7fa8" protocol=ttrpc version=3 Dec 16 13:17:54.559434 systemd[1]: Started cri-containerd-3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f.scope - libcontainer container 3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f. 
Dec 16 13:17:54.559808 containerd[1997]: time="2025-12-16T13:17:54.559685993Z" level=info msg="Container e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:17:54.584446 containerd[1997]: time="2025-12-16T13:17:54.584398955Z" level=info msg="CreateContainer within sandbox \"dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b\"" Dec 16 13:17:54.585586 containerd[1997]: time="2025-12-16T13:17:54.585555256Z" level=info msg="StartContainer for \"e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b\"" Dec 16 13:17:54.587418 containerd[1997]: time="2025-12-16T13:17:54.587385812Z" level=info msg="connecting to shim e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b" address="unix:///run/containerd/s/d6a2e17957bc7d25295886fe7931416d7f619b47e1020f42cad61a0fe774beac" protocol=ttrpc version=3 Dec 16 13:17:54.594392 systemd[1]: Started cri-containerd-8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf.scope - libcontainer container 8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf. Dec 16 13:17:54.623582 systemd[1]: Started cri-containerd-e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b.scope - libcontainer container e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b. 
Dec 16 13:17:54.685791 containerd[1997]: time="2025-12-16T13:17:54.685624574Z" level=info msg="StartContainer for \"3a9066769828feb8c1b26b7c7fcaf1ec61902168824d25c879f2665c914c8e2f\" returns successfully"
Dec 16 13:17:54.715812 containerd[1997]: time="2025-12-16T13:17:54.715260549Z" level=info msg="StartContainer for \"8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf\" returns successfully"
Dec 16 13:17:54.756211 containerd[1997]: time="2025-12-16T13:17:54.756138264Z" level=info msg="StartContainer for \"e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b\" returns successfully"
Dec 16 13:17:54.870278 kubelet[2932]: W1216 13:17:54.870041 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused
Dec 16 13:17:54.870278 kubelet[2932]: E1216 13:17:54.870168 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:17:54.903068 kubelet[2932]: W1216 13:17:54.902848 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-5&limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused
Dec 16 13:17:54.903068 kubelet[2932]: E1216 13:17:54.902940 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-5&limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:17:54.988847 kubelet[2932]: E1216 13:17:54.988774 2932 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-5?timeout=10s\": dial tcp 172.31.26.5:6443: connect: connection refused" interval="1.6s"
Dec 16 13:17:55.195853 kubelet[2932]: I1216 13:17:55.195623 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-5"
Dec 16 13:17:55.196362 kubelet[2932]: E1216 13:17:55.196325 2932 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.5:6443/api/v1/nodes\": dial tcp 172.31.26.5:6443: connect: connection refused" node="ip-172-31-26-5"
Dec 16 13:17:55.205378 kubelet[2932]: W1216 13:17:55.205242 2932 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.5:6443: connect: connection refused
Dec 16 13:17:55.205378 kubelet[2932]: E1216 13:17:55.205350 2932 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.5:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:17:55.452829 kubelet[2932]: E1216 13:17:55.452428 2932 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.5:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-5.1881b493483d5bed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-5,UID:ip-172-31-26-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-5,},FirstTimestamp:2025-12-16 13:17:53.571118061 +0000 UTC m=+0.792888423,LastTimestamp:2025-12-16 13:17:53.571118061 +0000 UTC m=+0.792888423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-5,}"
Dec 16 13:17:55.655677 kubelet[2932]: E1216 13:17:55.655649 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:55.666575 kubelet[2932]: E1216 13:17:55.664633 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:55.666575 kubelet[2932]: E1216 13:17:55.666431 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:56.670297 kubelet[2932]: E1216 13:17:56.669748 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:56.670297 kubelet[2932]: E1216 13:17:56.670044 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:56.671525 kubelet[2932]: E1216 13:17:56.671204 2932 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:56.798808 kubelet[2932]: I1216 13:17:56.798780 2932 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-5"
Dec 16 13:17:57.622416 kubelet[2932]: E1216 13:17:57.622358 2932 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-5\" not found" node="ip-172-31-26-5"
Dec 16 13:17:57.639641 kubelet[2932]: I1216 13:17:57.639597 2932 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-5"
Dec 16 13:17:57.669028 kubelet[2932]: I1216 13:17:57.668997 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:17:57.671512 kubelet[2932]: I1216 13:17:57.669636 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-5"
Dec 16 13:17:57.686082 kubelet[2932]: I1216 13:17:57.685993 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:17:57.695303 kubelet[2932]: E1216 13:17:57.695234 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-5"
Dec 16 13:17:57.696383 kubelet[2932]: E1216 13:17:57.695481 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:17:57.696383 kubelet[2932]: E1216 13:17:57.695758 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:17:57.696383 kubelet[2932]: I1216 13:17:57.695777 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-5"
Dec 16 13:17:57.698600 kubelet[2932]: E1216 13:17:57.698530 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-5"
Dec 16 13:17:57.698749 kubelet[2932]: I1216 13:17:57.698735 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:17:57.700476 kubelet[2932]: E1216 13:17:57.700447 2932 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:17:58.573347 kubelet[2932]: I1216 13:17:58.571657 2932 apiserver.go:52] "Watching apiserver"
Dec 16 13:17:58.588912 kubelet[2932]: I1216 13:17:58.588855 2932 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:17:59.519169 kubelet[2932]: I1216 13:17:59.519134 2932 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:17:59.716150 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 13:18:00.261326 systemd[1]: Reload requested from client PID 3205 ('systemctl') (unit session-7.scope)...
Dec 16 13:18:00.261345 systemd[1]: Reloading...
Dec 16 13:18:00.425131 zram_generator::config[3258]: No configuration found.
Dec 16 13:18:00.704350 systemd[1]: Reloading finished in 442 ms.
Dec 16 13:18:00.740047 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:18:00.762200 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 13:18:00.762451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:18:00.762522 systemd[1]: kubelet.service: Consumed 1.238s CPU time, 127.9M memory peak.
Dec 16 13:18:00.765274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:18:01.021990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:18:01.060907 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:18:01.146878 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:18:01.146878 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:18:01.146878 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:18:01.147649 kubelet[3309]: I1216 13:18:01.147021 3309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:18:01.158002 kubelet[3309]: I1216 13:18:01.157930 3309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 13:18:01.158384 kubelet[3309]: I1216 13:18:01.158113 3309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:18:01.161111 kubelet[3309]: I1216 13:18:01.160262 3309 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 13:18:01.162725 kubelet[3309]: I1216 13:18:01.162544 3309 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 16 13:18:01.166228 kubelet[3309]: I1216 13:18:01.166193 3309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:18:01.171563 kubelet[3309]: I1216 13:18:01.171525 3309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:18:01.179811 kubelet[3309]: I1216 13:18:01.179749 3309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:18:01.180855 kubelet[3309]: I1216 13:18:01.180800 3309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:18:01.181070 kubelet[3309]: I1216 13:18:01.180844 3309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:18:01.181201 kubelet[3309]: I1216 13:18:01.181082 3309 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:18:01.181201 kubelet[3309]: I1216 13:18:01.181100 3309 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 13:18:01.181201 kubelet[3309]: I1216 13:18:01.181159 3309 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:18:01.181350 kubelet[3309]: I1216 13:18:01.181326 3309 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 13:18:01.181392 kubelet[3309]: I1216 13:18:01.181350 3309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:18:01.181392 kubelet[3309]: I1216 13:18:01.181379 3309 kubelet.go:352] "Adding apiserver pod source"
Dec 16 13:18:01.181465 kubelet[3309]: I1216 13:18:01.181393 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:18:01.187828 kubelet[3309]: I1216 13:18:01.187787 3309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:18:01.189363 kubelet[3309]: I1216 13:18:01.188487 3309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 13:18:01.189363 kubelet[3309]: I1216 13:18:01.189205 3309 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:18:01.189363 kubelet[3309]: I1216 13:18:01.189240 3309 server.go:1287] "Started kubelet"
Dec 16 13:18:01.202979 kubelet[3309]: I1216 13:18:01.202944 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:18:01.237508 kubelet[3309]: I1216 13:18:01.237421 3309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:18:01.265647 kubelet[3309]: I1216 13:18:01.265600 3309 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 13:18:01.269266 kubelet[3309]: I1216 13:18:01.268544 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:18:01.296833 kubelet[3309]: I1216 13:18:01.288915 3309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:18:01.298092 kubelet[3309]: I1216 13:18:01.297969 3309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:18:01.305102 kubelet[3309]: I1216 13:18:01.302906 3309 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:18:01.305102 kubelet[3309]: E1216 13:18:01.303231 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-5\" not found"
Dec 16 13:18:01.305102 kubelet[3309]: I1216 13:18:01.303846 3309 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:18:01.305462 kubelet[3309]: I1216 13:18:01.305439 3309 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:18:01.372982 kubelet[3309]: I1216 13:18:01.372947 3309 factory.go:221] Registration of the systemd container factory successfully
Dec 16 13:18:01.374177 kubelet[3309]: I1216 13:18:01.373215 3309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:18:01.422381 kubelet[3309]: E1216 13:18:01.422340 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-5\" not found"
Dec 16 13:18:01.427270 kubelet[3309]: I1216 13:18:01.427230 3309 factory.go:221] Registration of the containerd container factory successfully
Dec 16 13:18:01.444400 kubelet[3309]: I1216 13:18:01.444361 3309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:18:01.489025 kubelet[3309]: E1216 13:18:01.455753 3309 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:18:01.526823 kubelet[3309]: E1216 13:18:01.525830 3309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-5\" not found"
Dec 16 13:18:01.568106 kubelet[3309]: I1216 13:18:01.541720 3309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:18:01.571143 kubelet[3309]: I1216 13:18:01.571103 3309 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 13:18:01.571280 kubelet[3309]: I1216 13:18:01.571156 3309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:18:01.571280 kubelet[3309]: I1216 13:18:01.571166 3309 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 16 13:18:01.571280 kubelet[3309]: E1216 13:18:01.571229 3309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:18:01.681655 kubelet[3309]: E1216 13:18:01.680893 3309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 13:18:01.883717 kubelet[3309]: E1216 13:18:01.883526 3309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932096 3309 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932118 3309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932147 3309 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932382 3309 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932394 3309 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932418 3309 policy_none.go:49] "None policy: Start"
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932431 3309 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:18:01.932602 kubelet[3309]: I1216 13:18:01.932444 3309 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:18:01.933011 kubelet[3309]: I1216 13:18:01.932641 3309 state_mem.go:75] "Updated machine memory state"
Dec 16 13:18:01.977210 kubelet[3309]: I1216 13:18:01.976416 3309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 13:18:01.977210 kubelet[3309]: I1216 13:18:01.976672 3309 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:18:01.977210 kubelet[3309]: I1216 13:18:01.976691 3309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:18:01.992662 kubelet[3309]: E1216 13:18:01.983287 3309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:18:01.997084 kubelet[3309]: I1216 13:18:01.995012 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:18:02.174809 kubelet[3309]: I1216 13:18:02.174389 3309 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-5"
Dec 16 13:18:02.211243 kubelet[3309]: I1216 13:18:02.210848 3309 apiserver.go:52] "Watching apiserver"
Dec 16 13:18:02.266473 kubelet[3309]: I1216 13:18:02.266376 3309 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-5"
Dec 16 13:18:02.266473 kubelet[3309]: I1216 13:18:02.266445 3309 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-5"
Dec 16 13:18:02.286007 kubelet[3309]: I1216 13:18:02.285942 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:18:02.286227 kubelet[3309]: I1216 13:18:02.286046 3309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-5"
Dec 16 13:18:02.310893 kubelet[3309]: I1216 13:18:02.310839 3309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:18:02.350721 kubelet[3309]: I1216 13:18:02.350328 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:18:02.350721 kubelet[3309]: I1216 13:18:02.350378 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:18:02.350721 kubelet[3309]: I1216 13:18:02.350407 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:18:02.350721 kubelet[3309]: I1216 13:18:02.350437 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3488d11dd50c593dc67bfab873570bcc-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-5\" (UID: \"3488d11dd50c593dc67bfab873570bcc\") " pod="kube-system/kube-scheduler-ip-172-31-26-5"
Dec 16 13:18:02.350721 kubelet[3309]: I1216 13:18:02.350463 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e1b5d91b7ec810355cb8af2b385422f-ca-certs\") pod \"kube-apiserver-ip-172-31-26-5\" (UID: \"9e1b5d91b7ec810355cb8af2b385422f\") " pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:18:02.356002 kubelet[3309]: I1216 13:18:02.350488 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e1b5d91b7ec810355cb8af2b385422f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-5\" (UID: \"9e1b5d91b7ec810355cb8af2b385422f\") " pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:18:02.356002 kubelet[3309]: I1216 13:18:02.350524 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:18:02.366336 kubelet[3309]: I1216 13:18:02.366249 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4dec78b87066f3a17da7b842b875b2c9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-5\" (UID: \"4dec78b87066f3a17da7b842b875b2c9\") " pod="kube-system/kube-controller-manager-ip-172-31-26-5"
Dec 16 13:18:02.366765 kubelet[3309]: I1216 13:18:02.366724 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e1b5d91b7ec810355cb8af2b385422f-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-5\" (UID: \"9e1b5d91b7ec810355cb8af2b385422f\") " pod="kube-system/kube-apiserver-ip-172-31-26-5"
Dec 16 13:18:02.535390 kubelet[3309]: I1216 13:18:02.535252 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-5" podStartSLOduration=3.53522994 podStartE2EDuration="3.53522994s" podCreationTimestamp="2025-12-16 13:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:18:02.527514346 +0000 UTC m=+1.453128549" watchObservedRunningTime="2025-12-16 13:18:02.53522994 +0000 UTC m=+1.460844137"
Dec 16 13:18:02.598156 kubelet[3309]: I1216 13:18:02.598086 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-5" podStartSLOduration=0.598067638 podStartE2EDuration="598.067638ms" podCreationTimestamp="2025-12-16 13:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:18:02.567260286 +0000 UTC m=+1.492874499" watchObservedRunningTime="2025-12-16 13:18:02.598067638 +0000 UTC m=+1.523681832"
Dec 16 13:18:02.707187 kubelet[3309]: I1216 13:18:02.706024 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-5" podStartSLOduration=0.706000903 podStartE2EDuration="706.000903ms" podCreationTimestamp="2025-12-16 13:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:18:02.598742997 +0000 UTC m=+1.524357207" watchObservedRunningTime="2025-12-16 13:18:02.706000903 +0000 UTC m=+1.631615106"
Dec 16 13:18:04.598468 kubelet[3309]: I1216 13:18:04.597739 3309 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 13:18:04.598468 kubelet[3309]: I1216 13:18:04.598408 3309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 13:18:04.599465 containerd[1997]: time="2025-12-16T13:18:04.598178946Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 13:18:05.031953 systemd[1]: Created slice kubepods-besteffort-podd8f6ad61_269e_40a2_a666_4be0ac2a9b96.slice - libcontainer container kubepods-besteffort-podd8f6ad61_269e_40a2_a666_4be0ac2a9b96.slice.
Dec 16 13:18:05.107170 kubelet[3309]: I1216 13:18:05.107118 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8f6ad61-269e-40a2-a666-4be0ac2a9b96-kube-proxy\") pod \"kube-proxy-z9q4l\" (UID: \"d8f6ad61-269e-40a2-a666-4be0ac2a9b96\") " pod="kube-system/kube-proxy-z9q4l"
Dec 16 13:18:05.107170 kubelet[3309]: I1216 13:18:05.107175 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8f6ad61-269e-40a2-a666-4be0ac2a9b96-xtables-lock\") pod \"kube-proxy-z9q4l\" (UID: \"d8f6ad61-269e-40a2-a666-4be0ac2a9b96\") " pod="kube-system/kube-proxy-z9q4l"
Dec 16 13:18:05.107379 kubelet[3309]: I1216 13:18:05.107201 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8f6ad61-269e-40a2-a666-4be0ac2a9b96-lib-modules\") pod \"kube-proxy-z9q4l\" (UID: \"d8f6ad61-269e-40a2-a666-4be0ac2a9b96\") " pod="kube-system/kube-proxy-z9q4l"
Dec 16 13:18:05.107379 kubelet[3309]: I1216 13:18:05.107225 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxj7\" (UniqueName: \"kubernetes.io/projected/d8f6ad61-269e-40a2-a666-4be0ac2a9b96-kube-api-access-tlxj7\") pod \"kube-proxy-z9q4l\" (UID: \"d8f6ad61-269e-40a2-a666-4be0ac2a9b96\") " pod="kube-system/kube-proxy-z9q4l"
Dec 16 13:18:05.346756 containerd[1997]: time="2025-12-16T13:18:05.346630265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9q4l,Uid:d8f6ad61-269e-40a2-a666-4be0ac2a9b96,Namespace:kube-system,Attempt:0,}"
Dec 16 13:18:05.402048 containerd[1997]: time="2025-12-16T13:18:05.401998295Z" level=info msg="connecting to shim 7564284a7742cad1d9346e1963417714d03895650f32fc99fd15d5d4a13663c6" address="unix:///run/containerd/s/00e4b0d13feb81ba942f283ea1d7a96b620a9d0b1fb6d4492afe4212384a4508" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:18:05.457281 systemd[1]: Started cri-containerd-7564284a7742cad1d9346e1963417714d03895650f32fc99fd15d5d4a13663c6.scope - libcontainer container 7564284a7742cad1d9346e1963417714d03895650f32fc99fd15d5d4a13663c6.
Dec 16 13:18:05.527319 containerd[1997]: time="2025-12-16T13:18:05.527274332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9q4l,Uid:d8f6ad61-269e-40a2-a666-4be0ac2a9b96,Namespace:kube-system,Attempt:0,} returns sandbox id \"7564284a7742cad1d9346e1963417714d03895650f32fc99fd15d5d4a13663c6\""
Dec 16 13:18:05.537591 containerd[1997]: time="2025-12-16T13:18:05.537541562Z" level=info msg="CreateContainer within sandbox \"7564284a7742cad1d9346e1963417714d03895650f32fc99fd15d5d4a13663c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 13:18:05.564360 containerd[1997]: time="2025-12-16T13:18:05.564303012Z" level=info msg="Container 2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:18:05.579267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794192018.mount: Deactivated successfully.
Dec 16 13:18:05.589394 containerd[1997]: time="2025-12-16T13:18:05.589133233Z" level=info msg="CreateContainer within sandbox \"7564284a7742cad1d9346e1963417714d03895650f32fc99fd15d5d4a13663c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a\""
Dec 16 13:18:05.601996 containerd[1997]: time="2025-12-16T13:18:05.601152304Z" level=info msg="StartContainer for \"2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a\""
Dec 16 13:18:05.607298 containerd[1997]: time="2025-12-16T13:18:05.607249508Z" level=info msg="connecting to shim 2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a" address="unix:///run/containerd/s/00e4b0d13feb81ba942f283ea1d7a96b620a9d0b1fb6d4492afe4212384a4508" protocol=ttrpc version=3
Dec 16 13:18:05.639023 systemd[1]: Started cri-containerd-2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a.scope - libcontainer container 2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a.
Dec 16 13:18:05.652378 systemd[1]: Created slice kubepods-besteffort-pod06589e1a_b43c_4aac_8144_493cd7ae2612.slice - libcontainer container kubepods-besteffort-pod06589e1a_b43c_4aac_8144_493cd7ae2612.slice.
Dec 16 13:18:05.715277 kubelet[3309]: I1216 13:18:05.714117 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06589e1a-b43c-4aac-8144-493cd7ae2612-var-lib-calico\") pod \"tigera-operator-7dcd859c48-m8zdq\" (UID: \"06589e1a-b43c-4aac-8144-493cd7ae2612\") " pod="tigera-operator/tigera-operator-7dcd859c48-m8zdq"
Dec 16 13:18:05.721340 kubelet[3309]: I1216 13:18:05.720167 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkqpc\" (UniqueName: \"kubernetes.io/projected/06589e1a-b43c-4aac-8144-493cd7ae2612-kube-api-access-zkqpc\") pod \"tigera-operator-7dcd859c48-m8zdq\" (UID: \"06589e1a-b43c-4aac-8144-493cd7ae2612\") " pod="tigera-operator/tigera-operator-7dcd859c48-m8zdq"
Dec 16 13:18:05.807435 containerd[1997]: time="2025-12-16T13:18:05.807388602Z" level=info msg="StartContainer for \"2f27301dcc011c4621f168ea9eb5753c921a8999aa8e720cfddb5d10afabab6a\" returns successfully"
Dec 16 13:18:05.961005 containerd[1997]: time="2025-12-16T13:18:05.960871253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m8zdq,Uid:06589e1a-b43c-4aac-8144-493cd7ae2612,Namespace:tigera-operator,Attempt:0,}"
Dec 16 13:18:06.037222 containerd[1997]: time="2025-12-16T13:18:06.037167307Z" level=info msg="connecting to shim bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f" address="unix:///run/containerd/s/483786798ebf588f00a1fc132bcf1a4bb806ab9e31c1d46771b556a3118f2539" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:18:06.078757 systemd[1]: Started cri-containerd-bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f.scope - libcontainer container bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f.
Dec 16 13:18:06.146151 containerd[1997]: time="2025-12-16T13:18:06.145881209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m8zdq,Uid:06589e1a-b43c-4aac-8144-493cd7ae2612,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f\""
Dec 16 13:18:06.156546 containerd[1997]: time="2025-12-16T13:18:06.156416706Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 16 13:18:07.731719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703342693.mount: Deactivated successfully.
Dec 16 13:18:08.088707 kubelet[3309]: I1216 13:18:08.088340 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z9q4l" podStartSLOduration=4.088324318 podStartE2EDuration="4.088324318s" podCreationTimestamp="2025-12-16 13:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:18:05.868217183 +0000 UTC m=+4.793831385" watchObservedRunningTime="2025-12-16 13:18:08.088324318 +0000 UTC m=+7.013938518"
Dec 16 13:18:09.075646 containerd[1997]: time="2025-12-16T13:18:09.075571307Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:18:09.077701 containerd[1997]: time="2025-12-16T13:18:09.077658492Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Dec 16 13:18:09.080142 containerd[1997]: time="2025-12-16T13:18:09.080075696Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:18:09.084220 containerd[1997]: time="2025-12-16T13:18:09.083493399Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:18:09.084220 containerd[1997]: time="2025-12-16T13:18:09.084107292Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.927636408s"
Dec 16 13:18:09.084220 containerd[1997]: time="2025-12-16T13:18:09.084135758Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 16 13:18:09.087289 containerd[1997]: time="2025-12-16T13:18:09.087254559Z" level=info msg="CreateContainer within sandbox \"bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 16 13:18:09.106382 containerd[1997]: time="2025-12-16T13:18:09.106177340Z" level=info msg="Container 1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:18:09.119043 containerd[1997]: time="2025-12-16T13:18:09.118990915Z" level=info msg="CreateContainer within sandbox \"bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\""
Dec 16 13:18:09.119907 containerd[1997]: time="2025-12-16T13:18:09.119860707Z" level=info msg="StartContainer for \"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\""
Dec 16 13:18:09.121513 containerd[1997]: time="2025-12-16T13:18:09.121453482Z" level=info msg="connecting to shim 1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793" address="unix:///run/containerd/s/483786798ebf588f00a1fc132bcf1a4bb806ab9e31c1d46771b556a3118f2539" protocol=ttrpc version=3
Dec 16 13:18:09.167291 systemd[1]: Started cri-containerd-1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793.scope - libcontainer container 1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793.
Dec 16 13:18:09.214007 containerd[1997]: time="2025-12-16T13:18:09.213219385Z" level=info msg="StartContainer for \"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\" returns successfully"
Dec 16 13:18:11.323797 kubelet[3309]: I1216 13:18:11.323418 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-m8zdq" podStartSLOduration=3.3861987 podStartE2EDuration="6.323395527s" podCreationTimestamp="2025-12-16 13:18:05 +0000 UTC" firstStartedPulling="2025-12-16 13:18:06.147973936 +0000 UTC m=+5.073588122" lastFinishedPulling="2025-12-16 13:18:09.085170767 +0000 UTC m=+8.010784949" observedRunningTime="2025-12-16 13:18:09.88681825 +0000 UTC m=+8.812432450" watchObservedRunningTime="2025-12-16 13:18:11.323395527 +0000 UTC m=+10.249009731"
Dec 16 13:18:12.953244 systemd[1]: Started sshd@7-172.31.26.5:22-54.196.33.20:52714.service - OpenSSH per-connection server daemon (54.196.33.20:52714).
Dec 16 13:18:13.728134 sshd[3658]: Connection closed by 54.196.33.20 port 52714 [preauth]
Dec 16 13:18:13.730679 systemd[1]: sshd@7-172.31.26.5:22-54.196.33.20:52714.service: Deactivated successfully.
Dec 16 13:18:13.854188 update_engine[1975]: I20251216 13:18:13.854110 1975 update_attempter.cc:509] Updating boot flags...
Dec 16 13:18:16.255541 sudo[2367]: pam_unix(sudo:session): session closed for user root
Dec 16 13:18:16.278635 sshd[2366]: Connection closed by 139.178.68.195 port 55378
Dec 16 13:18:16.279787 sshd-session[2363]: pam_unix(sshd:session): session closed for user core
Dec 16 13:18:16.290729 systemd[1]: sshd@6-172.31.26.5:22-139.178.68.195:55378.service: Deactivated successfully.
Dec 16 13:18:16.296799 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 13:18:16.297337 systemd[1]: session-7.scope: Consumed 5.199s CPU time, 151.3M memory peak.
Dec 16 13:18:16.303285 systemd-logind[1972]: Session 7 logged out. Waiting for processes to exit.
Dec 16 13:18:16.307753 systemd-logind[1972]: Removed session 7.
Dec 16 13:18:23.118910 systemd[1]: Created slice kubepods-besteffort-podd59bf461_38bf_419e_aadc_5ff281769993.slice - libcontainer container kubepods-besteffort-podd59bf461_38bf_419e_aadc_5ff281769993.slice.
Dec 16 13:18:23.145761 kubelet[3309]: I1216 13:18:23.145680 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d59bf461-38bf-419e-aadc-5ff281769993-tigera-ca-bundle\") pod \"calico-typha-746b55c996-nq8dw\" (UID: \"d59bf461-38bf-419e-aadc-5ff281769993\") " pod="calico-system/calico-typha-746b55c996-nq8dw"
Dec 16 13:18:23.146264 kubelet[3309]: I1216 13:18:23.145803 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d59bf461-38bf-419e-aadc-5ff281769993-typha-certs\") pod \"calico-typha-746b55c996-nq8dw\" (UID: \"d59bf461-38bf-419e-aadc-5ff281769993\") " pod="calico-system/calico-typha-746b55c996-nq8dw"
Dec 16 13:18:23.146264 kubelet[3309]: I1216 13:18:23.145845 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2j6l\" (UniqueName: \"kubernetes.io/projected/d59bf461-38bf-419e-aadc-5ff281769993-kube-api-access-c2j6l\") pod \"calico-typha-746b55c996-nq8dw\" (UID: \"d59bf461-38bf-419e-aadc-5ff281769993\") " pod="calico-system/calico-typha-746b55c996-nq8dw"
Dec 16 13:18:23.308442 systemd[1]: Created slice kubepods-besteffort-pod63c0071d_d528_4a2e_98a2_f84d845df840.slice - libcontainer container kubepods-besteffort-pod63c0071d_d528_4a2e_98a2_f84d845df840.slice.
Dec 16 13:18:23.347852 kubelet[3309]: I1216 13:18:23.347807 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/63c0071d-d528-4a2e-98a2-f84d845df840-node-certs\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.347852 kubelet[3309]: I1216 13:18:23.347862 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63c0071d-d528-4a2e-98a2-f84d845df840-tigera-ca-bundle\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348123 kubelet[3309]: I1216 13:18:23.347891 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-cni-log-dir\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348123 kubelet[3309]: I1216 13:18:23.347911 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-flexvol-driver-host\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348123 kubelet[3309]: I1216 13:18:23.347938 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-cni-bin-dir\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348123 kubelet[3309]: I1216 13:18:23.347962 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-policysync\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348123 kubelet[3309]: I1216 13:18:23.347992 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7th\" (UniqueName: \"kubernetes.io/projected/63c0071d-d528-4a2e-98a2-f84d845df840-kube-api-access-sf7th\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348530 kubelet[3309]: I1216 13:18:23.348015 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-cni-net-dir\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348530 kubelet[3309]: I1216 13:18:23.348035 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-var-run-calico\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348530 kubelet[3309]: I1216 13:18:23.348215 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-xtables-lock\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348530 kubelet[3309]: I1216 13:18:23.348293 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-var-lib-calico\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.348699 kubelet[3309]: I1216 13:18:23.348662 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63c0071d-d528-4a2e-98a2-f84d845df840-lib-modules\") pod \"calico-node-nt4q2\" (UID: \"63c0071d-d528-4a2e-98a2-f84d845df840\") " pod="calico-system/calico-node-nt4q2"
Dec 16 13:18:23.429026 containerd[1997]: time="2025-12-16T13:18:23.428901022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-746b55c996-nq8dw,Uid:d59bf461-38bf-419e-aadc-5ff281769993,Namespace:calico-system,Attempt:0,}"
Dec 16 13:18:23.458501 kubelet[3309]: E1216 13:18:23.458469 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:18:23.458501 kubelet[3309]: W1216 13:18:23.458495 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:18:23.464600 kubelet[3309]: E1216 13:18:23.461394 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:18:23.477501 kubelet[3309]: E1216 13:18:23.477471 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:18:23.477501 kubelet[3309]: W1216 13:18:23.477496 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:18:23.477660 kubelet[3309]: E1216 13:18:23.477516 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:18:23.487149 containerd[1997]: time="2025-12-16T13:18:23.487089446Z" level=info msg="connecting to shim aebe5bac8b3d5ae7328214e64b714366723d3fcbca18c51af40ba9478bcf35b2" address="unix:///run/containerd/s/3f7b9f015a86021b3b2b9b246b2361cfcbc9b0dab5b1fbcb195078b860e4b674" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:18:23.516335 kubelet[3309]: E1216 13:18:23.516131 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00"
Dec 16 13:18:23.533525 systemd[1]: Started cri-containerd-aebe5bac8b3d5ae7328214e64b714366723d3fcbca18c51af40ba9478bcf35b2.scope - libcontainer container aebe5bac8b3d5ae7328214e64b714366723d3fcbca18c51af40ba9478bcf35b2.
Dec 16 13:18:23.613481 kubelet[3309]: E1216 13:18:23.613449 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.613481 kubelet[3309]: W1216 13:18:23.613474 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.613633 kubelet[3309]: E1216 13:18:23.613495 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.613717 kubelet[3309]: E1216 13:18:23.613649 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.613717 kubelet[3309]: W1216 13:18:23.613655 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.613717 kubelet[3309]: E1216 13:18:23.613663 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.613810 kubelet[3309]: E1216 13:18:23.613798 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.613810 kubelet[3309]: W1216 13:18:23.613809 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.613862 kubelet[3309]: E1216 13:18:23.613819 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.614008 kubelet[3309]: E1216 13:18:23.613995 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614008 kubelet[3309]: W1216 13:18:23.614007 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614079 kubelet[3309]: E1216 13:18:23.614016 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.614196 kubelet[3309]: E1216 13:18:23.614185 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614231 kubelet[3309]: W1216 13:18:23.614196 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614231 kubelet[3309]: E1216 13:18:23.614203 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.614338 kubelet[3309]: E1216 13:18:23.614327 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614362 kubelet[3309]: W1216 13:18:23.614337 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614362 kubelet[3309]: E1216 13:18:23.614344 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.614478 kubelet[3309]: E1216 13:18:23.614467 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614508 kubelet[3309]: W1216 13:18:23.614477 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614508 kubelet[3309]: E1216 13:18:23.614484 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.614619 kubelet[3309]: E1216 13:18:23.614608 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614643 kubelet[3309]: W1216 13:18:23.614618 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614643 kubelet[3309]: E1216 13:18:23.614625 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.614776 kubelet[3309]: E1216 13:18:23.614765 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614776 kubelet[3309]: W1216 13:18:23.614776 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614827 kubelet[3309]: E1216 13:18:23.614782 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.614912 kubelet[3309]: E1216 13:18:23.614901 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.614937 kubelet[3309]: W1216 13:18:23.614914 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.614937 kubelet[3309]: E1216 13:18:23.614921 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.615062 kubelet[3309]: E1216 13:18:23.615040 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.615062 kubelet[3309]: W1216 13:18:23.615062 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.615122 kubelet[3309]: E1216 13:18:23.615069 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.615213 kubelet[3309]: E1216 13:18:23.615201 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.615241 kubelet[3309]: W1216 13:18:23.615212 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.615241 kubelet[3309]: E1216 13:18:23.615219 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.616335 kubelet[3309]: E1216 13:18:23.616307 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.616335 kubelet[3309]: W1216 13:18:23.616331 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.616449 kubelet[3309]: E1216 13:18:23.616347 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.616811 kubelet[3309]: E1216 13:18:23.616515 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.616811 kubelet[3309]: W1216 13:18:23.616521 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.616811 kubelet[3309]: E1216 13:18:23.616529 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.616811 kubelet[3309]: E1216 13:18:23.616656 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.616811 kubelet[3309]: W1216 13:18:23.616661 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.616811 kubelet[3309]: E1216 13:18:23.616668 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.616966 kubelet[3309]: E1216 13:18:23.616840 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.616966 kubelet[3309]: W1216 13:18:23.616848 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.616966 kubelet[3309]: E1216 13:18:23.616857 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617043 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.618085 kubelet[3309]: W1216 13:18:23.617061 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617076 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617203 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.618085 kubelet[3309]: W1216 13:18:23.617210 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617218 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617339 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.618085 kubelet[3309]: W1216 13:18:23.617345 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617351 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.618085 kubelet[3309]: E1216 13:18:23.617475 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.618351 kubelet[3309]: W1216 13:18:23.617481 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.618351 kubelet[3309]: E1216 13:18:23.617487 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.619410 containerd[1997]: time="2025-12-16T13:18:23.619380763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nt4q2,Uid:63c0071d-d528-4a2e-98a2-f84d845df840,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:23.651707 kubelet[3309]: E1216 13:18:23.651674 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.651707 kubelet[3309]: W1216 13:18:23.651694 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.651707 kubelet[3309]: E1216 13:18:23.651714 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.651882 kubelet[3309]: I1216 13:18:23.651743 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0d725eaf-63cb-4894-b5fe-56fa81c91e00-registration-dir\") pod \"csi-node-driver-p2jzr\" (UID: \"0d725eaf-63cb-4894-b5fe-56fa81c91e00\") " pod="calico-system/csi-node-driver-p2jzr" Dec 16 13:18:23.652946 kubelet[3309]: E1216 13:18:23.652919 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.652946 kubelet[3309]: W1216 13:18:23.652937 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.653165 kubelet[3309]: E1216 13:18:23.652953 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.653165 kubelet[3309]: I1216 13:18:23.652973 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmzf\" (UniqueName: \"kubernetes.io/projected/0d725eaf-63cb-4894-b5fe-56fa81c91e00-kube-api-access-fbmzf\") pod \"csi-node-driver-p2jzr\" (UID: \"0d725eaf-63cb-4894-b5fe-56fa81c91e00\") " pod="calico-system/csi-node-driver-p2jzr" Dec 16 13:18:23.653642 kubelet[3309]: E1216 13:18:23.653618 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.653642 kubelet[3309]: W1216 13:18:23.653633 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.653642 kubelet[3309]: E1216 13:18:23.653647 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.653777 kubelet[3309]: I1216 13:18:23.653666 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0d725eaf-63cb-4894-b5fe-56fa81c91e00-varrun\") pod \"csi-node-driver-p2jzr\" (UID: \"0d725eaf-63cb-4894-b5fe-56fa81c91e00\") " pod="calico-system/csi-node-driver-p2jzr" Dec 16 13:18:23.654122 kubelet[3309]: E1216 13:18:23.654105 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.654122 kubelet[3309]: W1216 13:18:23.654120 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.654228 kubelet[3309]: E1216 13:18:23.654132 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.654438 kubelet[3309]: I1216 13:18:23.654422 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d725eaf-63cb-4894-b5fe-56fa81c91e00-kubelet-dir\") pod \"csi-node-driver-p2jzr\" (UID: \"0d725eaf-63cb-4894-b5fe-56fa81c91e00\") " pod="calico-system/csi-node-driver-p2jzr" Dec 16 13:18:23.654615 kubelet[3309]: E1216 13:18:23.654594 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.654615 kubelet[3309]: W1216 13:18:23.654609 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.654670 kubelet[3309]: E1216 13:18:23.654621 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.655259 kubelet[3309]: E1216 13:18:23.655241 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.655259 kubelet[3309]: W1216 13:18:23.655257 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.655432 kubelet[3309]: E1216 13:18:23.655333 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.655811 kubelet[3309]: E1216 13:18:23.655795 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.655811 kubelet[3309]: W1216 13:18:23.655807 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.655900 kubelet[3309]: E1216 13:18:23.655884 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.656046 kubelet[3309]: E1216 13:18:23.656029 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.656046 kubelet[3309]: W1216 13:18:23.656041 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.656797 kubelet[3309]: E1216 13:18:23.656149 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.656797 kubelet[3309]: I1216 13:18:23.656170 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0d725eaf-63cb-4894-b5fe-56fa81c91e00-socket-dir\") pod \"csi-node-driver-p2jzr\" (UID: \"0d725eaf-63cb-4894-b5fe-56fa81c91e00\") " pod="calico-system/csi-node-driver-p2jzr" Dec 16 13:18:23.657448 kubelet[3309]: E1216 13:18:23.657431 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.657448 kubelet[3309]: W1216 13:18:23.657446 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.657577 kubelet[3309]: E1216 13:18:23.657562 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.657638 kubelet[3309]: E1216 13:18:23.657626 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.657638 kubelet[3309]: W1216 13:18:23.657637 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.657824 kubelet[3309]: E1216 13:18:23.657808 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.657877 kubelet[3309]: E1216 13:18:23.657861 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.657877 kubelet[3309]: W1216 13:18:23.657869 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.657923 kubelet[3309]: E1216 13:18:23.657878 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.658213 kubelet[3309]: E1216 13:18:23.658198 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.658254 kubelet[3309]: W1216 13:18:23.658219 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.658254 kubelet[3309]: E1216 13:18:23.658233 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.658727 kubelet[3309]: E1216 13:18:23.658710 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.658727 kubelet[3309]: W1216 13:18:23.658724 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.658800 kubelet[3309]: E1216 13:18:23.658734 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.659073 kubelet[3309]: E1216 13:18:23.658961 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.659073 kubelet[3309]: W1216 13:18:23.658971 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.659073 kubelet[3309]: E1216 13:18:23.658980 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.660034 kubelet[3309]: E1216 13:18:23.659460 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.660034 kubelet[3309]: W1216 13:18:23.659473 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.660034 kubelet[3309]: E1216 13:18:23.659484 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.660405 containerd[1997]: time="2025-12-16T13:18:23.660032865Z" level=info msg="connecting to shim c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e" address="unix:///run/containerd/s/84e8ed16ddba4bbd59f7bcb7bba6a8cef14da334a515300434f13233da0caac6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:23.705290 systemd[1]: Started cri-containerd-c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e.scope - libcontainer container c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e. 
Dec 16 13:18:23.756240 containerd[1997]: time="2025-12-16T13:18:23.756182193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-746b55c996-nq8dw,Uid:d59bf461-38bf-419e-aadc-5ff281769993,Namespace:calico-system,Attempt:0,} returns sandbox id \"aebe5bac8b3d5ae7328214e64b714366723d3fcbca18c51af40ba9478bcf35b2\"" Dec 16 13:18:23.760038 containerd[1997]: time="2025-12-16T13:18:23.760001649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:18:23.760732 kubelet[3309]: E1216 13:18:23.760683 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.760732 kubelet[3309]: W1216 13:18:23.760703 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.760732 kubelet[3309]: E1216 13:18:23.760724 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.761756 kubelet[3309]: E1216 13:18:23.761455 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.761756 kubelet[3309]: W1216 13:18:23.761471 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.762498 kubelet[3309]: E1216 13:18:23.762471 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.762766 kubelet[3309]: E1216 13:18:23.762593 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.762766 kubelet[3309]: W1216 13:18:23.762605 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.762766 kubelet[3309]: E1216 13:18:23.762635 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.763607 kubelet[3309]: E1216 13:18:23.762861 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.763607 kubelet[3309]: W1216 13:18:23.762872 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.763607 kubelet[3309]: E1216 13:18:23.762898 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.763607 kubelet[3309]: E1216 13:18:23.763130 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.763607 kubelet[3309]: W1216 13:18:23.763142 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.763607 kubelet[3309]: E1216 13:18:23.763168 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.764526 kubelet[3309]: E1216 13:18:23.763671 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.764526 kubelet[3309]: W1216 13:18:23.763683 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.764526 kubelet[3309]: E1216 13:18:23.763722 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.764526 kubelet[3309]: E1216 13:18:23.764312 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.764526 kubelet[3309]: W1216 13:18:23.764335 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.764526 kubelet[3309]: E1216 13:18:23.764515 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.765681 kubelet[3309]: E1216 13:18:23.765240 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.765681 kubelet[3309]: W1216 13:18:23.765252 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.765681 kubelet[3309]: E1216 13:18:23.765294 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.765681 kubelet[3309]: E1216 13:18:23.765553 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.765681 kubelet[3309]: W1216 13:18:23.765563 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.765681 kubelet[3309]: E1216 13:18:23.765598 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.767035 kubelet[3309]: E1216 13:18:23.766397 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.767035 kubelet[3309]: W1216 13:18:23.766415 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.767035 kubelet[3309]: E1216 13:18:23.766739 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.767548 kubelet[3309]: E1216 13:18:23.767529 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.767548 kubelet[3309]: W1216 13:18:23.767547 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.768241 kubelet[3309]: E1216 13:18:23.768077 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.768646 kubelet[3309]: E1216 13:18:23.768611 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.768646 kubelet[3309]: W1216 13:18:23.768625 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.768747 kubelet[3309]: E1216 13:18:23.768713 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.769435 kubelet[3309]: E1216 13:18:23.769304 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.769435 kubelet[3309]: W1216 13:18:23.769328 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.769974 kubelet[3309]: E1216 13:18:23.769948 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.770265 kubelet[3309]: E1216 13:18:23.770247 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.770344 kubelet[3309]: W1216 13:18:23.770265 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.770629 kubelet[3309]: E1216 13:18:23.770459 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.770952 kubelet[3309]: E1216 13:18:23.770831 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.770952 kubelet[3309]: W1216 13:18:23.770851 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.771390 kubelet[3309]: E1216 13:18:23.771001 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.771943 kubelet[3309]: E1216 13:18:23.771611 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.771943 kubelet[3309]: W1216 13:18:23.771628 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.771943 kubelet[3309]: E1216 13:18:23.771821 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.772272 kubelet[3309]: E1216 13:18:23.772258 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.772458 kubelet[3309]: W1216 13:18:23.772442 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.772596 kubelet[3309]: E1216 13:18:23.772576 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.772964 kubelet[3309]: E1216 13:18:23.772949 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.773202 kubelet[3309]: W1216 13:18:23.773094 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.773419 kubelet[3309]: E1216 13:18:23.773125 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.773534 kubelet[3309]: E1216 13:18:23.773518 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.773585 kubelet[3309]: W1216 13:18:23.773550 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.773726 kubelet[3309]: E1216 13:18:23.773700 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.774173 kubelet[3309]: E1216 13:18:23.774155 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.774173 kubelet[3309]: W1216 13:18:23.774172 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.774291 kubelet[3309]: E1216 13:18:23.774192 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.774485 kubelet[3309]: E1216 13:18:23.774467 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.774485 kubelet[3309]: W1216 13:18:23.774481 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.774630 kubelet[3309]: E1216 13:18:23.774520 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.775185 kubelet[3309]: E1216 13:18:23.775151 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.775263 kubelet[3309]: W1216 13:18:23.775186 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.775312 kubelet[3309]: E1216 13:18:23.775272 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.775505 kubelet[3309]: E1216 13:18:23.775490 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.775573 kubelet[3309]: W1216 13:18:23.775506 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.775709 kubelet[3309]: E1216 13:18:23.775694 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.775864 kubelet[3309]: E1216 13:18:23.775774 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.775864 kubelet[3309]: W1216 13:18:23.775786 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.775864 kubelet[3309]: E1216 13:18:23.775810 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:23.776207 kubelet[3309]: E1216 13:18:23.776156 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.776207 kubelet[3309]: W1216 13:18:23.776172 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.776207 kubelet[3309]: E1216 13:18:23.776186 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:23.776528 containerd[1997]: time="2025-12-16T13:18:23.776180775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nt4q2,Uid:63c0071d-d528-4a2e-98a2-f84d845df840,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\"" Dec 16 13:18:23.797389 kubelet[3309]: E1216 13:18:23.797300 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:23.797389 kubelet[3309]: W1216 13:18:23.797324 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:23.797389 kubelet[3309]: E1216 13:18:23.797348 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:25.060496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3775320300.mount: Deactivated successfully. 
Dec 16 13:18:25.575749 kubelet[3309]: E1216 13:18:25.575676 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:26.609781 containerd[1997]: time="2025-12-16T13:18:26.609711878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:26.611740 containerd[1997]: time="2025-12-16T13:18:26.611532311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 13:18:26.614000 containerd[1997]: time="2025-12-16T13:18:26.613958530Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:26.617088 containerd[1997]: time="2025-12-16T13:18:26.617041031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:26.617748 containerd[1997]: time="2025-12-16T13:18:26.617545791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.857502473s" Dec 16 13:18:26.617748 containerd[1997]: time="2025-12-16T13:18:26.617577728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:18:26.618819 containerd[1997]: time="2025-12-16T13:18:26.618790834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:18:26.633623 containerd[1997]: time="2025-12-16T13:18:26.633583680Z" level=info msg="CreateContainer within sandbox \"aebe5bac8b3d5ae7328214e64b714366723d3fcbca18c51af40ba9478bcf35b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:18:26.669850 containerd[1997]: time="2025-12-16T13:18:26.669640986Z" level=info msg="Container 5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:18:26.685573 containerd[1997]: time="2025-12-16T13:18:26.685530389Z" level=info msg="CreateContainer within sandbox \"aebe5bac8b3d5ae7328214e64b714366723d3fcbca18c51af40ba9478bcf35b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4\"" Dec 16 13:18:26.686299 containerd[1997]: time="2025-12-16T13:18:26.686223403Z" level=info msg="StartContainer for \"5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4\"" Dec 16 13:18:26.687239 containerd[1997]: time="2025-12-16T13:18:26.687214022Z" level=info msg="connecting to shim 5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4" address="unix:///run/containerd/s/3f7b9f015a86021b3b2b9b246b2361cfcbc9b0dab5b1fbcb195078b860e4b674" protocol=ttrpc version=3 Dec 16 13:18:26.711284 systemd[1]: Started cri-containerd-5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4.scope - libcontainer container 5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4. 
Dec 16 13:18:26.771070 containerd[1997]: time="2025-12-16T13:18:26.771002269Z" level=info msg="StartContainer for \"5122fbd54f63287cc7a3fa16309ddae837d1201b83f1a1e23cc9a31fd8e623b4\" returns successfully" Dec 16 13:18:26.941471 kubelet[3309]: E1216 13:18:26.941297 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:26.941471 kubelet[3309]: W1216 13:18:26.941322 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:26.941471 kubelet[3309]: E1216 13:18:26.941349 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:26.943330 kubelet[3309]: E1216 13:18:26.942988 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:26.943330 kubelet[3309]: W1216 13:18:26.943010 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:26.943330 kubelet[3309]: E1216 13:18:26.943093 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 16 13:18:27.573831 kubelet[3309]: E1216 13:18:27.571935 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00"
Dec 16 13:18:27.838550 containerd[1997]: time="2025-12-16T13:18:27.838411338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:18:27.840852 containerd[1997]: time="2025-12-16T13:18:27.840640153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Dec 16 13:18:27.843164 containerd[1997]: time="2025-12-16T13:18:27.843116101Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:18:27.845963 containerd[1997]: time="2025-12-16T13:18:27.845905307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:18:27.846699 containerd[1997]: time="2025-12-16T13:18:27.846645871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.227825856s"
Dec 16 13:18:27.846699 containerd[1997]: time="2025-12-16T13:18:27.846684073Z" level=info msg="PullImage
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Dec 16 13:18:27.848980 containerd[1997]: time="2025-12-16T13:18:27.848953996Z" level=info msg="CreateContainer within sandbox \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 13:18:27.877896 containerd[1997]: time="2025-12-16T13:18:27.869246673Z" level=info msg="Container 7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:18:27.892092 containerd[1997]: time="2025-12-16T13:18:27.892040963Z" level=info msg="CreateContainer within sandbox \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b\""
Dec 16 13:18:27.892769 containerd[1997]: time="2025-12-16T13:18:27.892705666Z" level=info msg="StartContainer for \"7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b\""
Dec 16 13:18:27.895522 containerd[1997]: time="2025-12-16T13:18:27.895496048Z" level=info msg="connecting to shim 7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b" address="unix:///run/containerd/s/84e8ed16ddba4bbd59f7bcb7bba6a8cef14da334a515300434f13233da0caac6" protocol=ttrpc version=3
Dec 16 13:18:27.923296 systemd[1]: Started cri-containerd-7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b.scope - libcontainer container 7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b.
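The repeated kubelet driver-call errors in this log all have the same cause: the FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument, the binary does not exist on this node, so the call produces empty output, and unmarshalling "" as the driver's JSON status fails with "unexpected end of JSON input". A minimal Python sketch of that failure mode (the kubelet itself is Go; Python's json module raises an equivalent decode error on empty input):

```python
import json
import os

# Driver path the kubelet execs for the "nodeagent~uds" FlexVolume plugin
# (taken from the log above); on this node the binary does not exist.
driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
driver_output = ""  # a failed exec yields no output

# The kubelet expects the driver to print a JSON status object; parsing the
# empty string fails, which is what the repeated driver-call.go errors report.
try:
    status = json.loads(driver_output)
except json.JSONDecodeError as err:
    status = None
    print(f"driver missing: {not os.path.exists(driver)}; parse error: {err}")
```

Because the error is raised inside the periodic plugin prober, the same three-line triplet (unmarshal failure, driver-call failure, probe error) recurs on every probe cycle, which is why it floods the log.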
Dec 16 13:18:27.948477 kubelet[3309]: I1216 13:18:27.948393 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 13:18:27.956995 kubelet[3309]: E1216 13:18:27.956962 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:18:27.956995 kubelet[3309]: W1216 13:18:27.956987 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:18:27.957167 kubelet[3309]: E1216 13:18:27.957008 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:18:27.957300 kubelet[3309]: E1216 13:18:27.957278 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:18:27.957336 kubelet[3309]: W1216 13:18:27.957291 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:18:27.957336 kubelet[3309]: E1216 13:18:27.957315 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 16 13:18:27.960045 kubelet[3309]: E1216 13:18:27.959987 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:27.960045 kubelet[3309]: W1216 13:18:27.959993 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:27.960045 kubelet[3309]: E1216 13:18:27.960000 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.009300 containerd[1997]: time="2025-12-16T13:18:28.009178381Z" level=info msg="StartContainer for \"7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b\" returns successfully" Dec 16 13:18:28.013969 kubelet[3309]: E1216 13:18:28.013936 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.014128 kubelet[3309]: W1216 13:18:28.014099 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.014225 kubelet[3309]: E1216 13:18:28.014209 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.015006 kubelet[3309]: E1216 13:18:28.014967 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.015006 kubelet[3309]: W1216 13:18:28.014996 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.015202 kubelet[3309]: E1216 13:18:28.015105 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.015605 kubelet[3309]: E1216 13:18:28.015575 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.015605 kubelet[3309]: W1216 13:18:28.015591 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.015605 kubelet[3309]: E1216 13:18:28.015607 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.016004 kubelet[3309]: E1216 13:18:28.015988 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.016004 kubelet[3309]: W1216 13:18:28.016003 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.016104 kubelet[3309]: E1216 13:18:28.016085 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.016238 kubelet[3309]: E1216 13:18:28.016215 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.016238 kubelet[3309]: W1216 13:18:28.016226 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016391 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016530 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.017160 kubelet[3309]: W1216 13:18:28.016537 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016556 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016714 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.017160 kubelet[3309]: W1216 13:18:28.016719 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016735 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016876 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.017160 kubelet[3309]: W1216 13:18:28.016883 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.017160 kubelet[3309]: E1216 13:18:28.016901 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.017530 kubelet[3309]: E1216 13:18:28.017087 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.017530 kubelet[3309]: W1216 13:18:28.017094 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.017530 kubelet[3309]: E1216 13:18:28.017111 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.017625 kubelet[3309]: E1216 13:18:28.017575 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.017625 kubelet[3309]: W1216 13:18:28.017583 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.017625 kubelet[3309]: E1216 13:18:28.017595 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.018167 kubelet[3309]: E1216 13:18:28.017718 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.018167 kubelet[3309]: W1216 13:18:28.017724 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.018167 kubelet[3309]: E1216 13:18:28.017797 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.018167 kubelet[3309]: E1216 13:18:28.017898 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.018167 kubelet[3309]: W1216 13:18:28.017903 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.018167 kubelet[3309]: E1216 13:18:28.017974 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.018167 kubelet[3309]: E1216 13:18:28.018082 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.018167 kubelet[3309]: W1216 13:18:28.018088 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.018167 kubelet[3309]: E1216 13:18:28.018106 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.018541 kubelet[3309]: E1216 13:18:28.018282 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.018541 kubelet[3309]: W1216 13:18:28.018288 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.018541 kubelet[3309]: E1216 13:18:28.018304 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.018832 kubelet[3309]: E1216 13:18:28.018620 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.018832 kubelet[3309]: W1216 13:18:28.018627 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.018832 kubelet[3309]: E1216 13:18:28.018638 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:18:28.018832 kubelet[3309]: E1216 13:18:28.018796 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.018832 kubelet[3309]: W1216 13:18:28.018803 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.018832 kubelet[3309]: E1216 13:18:28.018810 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.019520 kubelet[3309]: E1216 13:18:28.018950 3309 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:18:28.019520 kubelet[3309]: W1216 13:18:28.018956 3309 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:18:28.019520 kubelet[3309]: E1216 13:18:28.018962 3309 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:18:28.021446 systemd[1]: cri-containerd-7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b.scope: Deactivated successfully. 
Dec 16 13:18:28.051421 containerd[1997]: time="2025-12-16T13:18:28.051356868Z" level=info msg="received container exit event container_id:\"7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b\" id:\"7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b\" pid:4085 exited_at:{seconds:1765891108 nanos:25118975}" Dec 16 13:18:28.092553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e0edbf07c1cd8a64163bde76796ab1ea717b0b7c8caa53e2a0f925a53d2403b-rootfs.mount: Deactivated successfully. Dec 16 13:18:28.955350 containerd[1997]: time="2025-12-16T13:18:28.955159080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:18:28.978963 kubelet[3309]: I1216 13:18:28.978905 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-746b55c996-nq8dw" podStartSLOduration=3.119107078 podStartE2EDuration="5.978888023s" podCreationTimestamp="2025-12-16 13:18:23 +0000 UTC" firstStartedPulling="2025-12-16 13:18:23.758689903 +0000 UTC m=+22.684304089" lastFinishedPulling="2025-12-16 13:18:26.61847085 +0000 UTC m=+25.544085034" observedRunningTime="2025-12-16 13:18:27.004406965 +0000 UTC m=+25.930021166" watchObservedRunningTime="2025-12-16 13:18:28.978888023 +0000 UTC m=+27.904502235" Dec 16 13:18:29.573092 kubelet[3309]: E1216 13:18:29.572104 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:31.572146 kubelet[3309]: E1216 13:18:31.572075 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:33.185838 containerd[1997]: time="2025-12-16T13:18:33.185782533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:33.186956 containerd[1997]: time="2025-12-16T13:18:33.186770728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:18:33.188185 containerd[1997]: time="2025-12-16T13:18:33.188149091Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:33.191691 containerd[1997]: time="2025-12-16T13:18:33.191471291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:33.192147 containerd[1997]: time="2025-12-16T13:18:33.192119233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.236761526s" Dec 16 13:18:33.192233 containerd[1997]: time="2025-12-16T13:18:33.192157592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:18:33.196463 containerd[1997]: time="2025-12-16T13:18:33.196422385Z" level=info msg="CreateContainer within sandbox \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:18:33.206080 
containerd[1997]: time="2025-12-16T13:18:33.206023425Z" level=info msg="Container 7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:18:33.222177 containerd[1997]: time="2025-12-16T13:18:33.222124627Z" level=info msg="CreateContainer within sandbox \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393\"" Dec 16 13:18:33.223251 containerd[1997]: time="2025-12-16T13:18:33.223175321Z" level=info msg="StartContainer for \"7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393\"" Dec 16 13:18:33.225473 containerd[1997]: time="2025-12-16T13:18:33.225428066Z" level=info msg="connecting to shim 7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393" address="unix:///run/containerd/s/84e8ed16ddba4bbd59f7bcb7bba6a8cef14da334a515300434f13233da0caac6" protocol=ttrpc version=3 Dec 16 13:18:33.255279 systemd[1]: Started cri-containerd-7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393.scope - libcontainer container 7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393. Dec 16 13:18:33.391046 containerd[1997]: time="2025-12-16T13:18:33.390988047Z" level=info msg="StartContainer for \"7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393\" returns successfully" Dec 16 13:18:33.576387 kubelet[3309]: E1216 13:18:33.575278 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:34.489146 systemd[1]: cri-containerd-7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393.scope: Deactivated successfully. 
Dec 16 13:18:34.489510 systemd[1]: cri-containerd-7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393.scope: Consumed 614ms CPU time, 162.4M memory peak, 6.4M read from disk, 171.3M written to disk. Dec 16 13:18:34.496556 containerd[1997]: time="2025-12-16T13:18:34.496509382Z" level=info msg="received container exit event container_id:\"7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393\" id:\"7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393\" pid:4181 exited_at:{seconds:1765891114 nanos:495192979}" Dec 16 13:18:34.600049 kubelet[3309]: I1216 13:18:34.599816 3309 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:18:34.642514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c160cd56605253bbd67df4acbab62379e6cf786a10062bba80ba4b657ee4393-rootfs.mount: Deactivated successfully. Dec 16 13:18:34.682792 kubelet[3309]: I1216 13:18:34.682716 3309 status_manager.go:890] "Failed to get status for pod" podUID="0841f344-51c4-439d-ae21-35f7829c767b" pod="kube-system/coredns-668d6bf9bc-99pm5" err="pods \"coredns-668d6bf9bc-99pm5\" is forbidden: User \"system:node:ip-172-31-26-5\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-5' and this object" Dec 16 13:18:34.694610 systemd[1]: Created slice kubepods-burstable-pod0841f344_51c4_439d_ae21_35f7829c767b.slice - libcontainer container kubepods-burstable-pod0841f344_51c4_439d_ae21_35f7829c767b.slice. Dec 16 13:18:34.715035 systemd[1]: Created slice kubepods-burstable-pod2dadeccb_8701_4c2b_8c6d_546d297b7e36.slice - libcontainer container kubepods-burstable-pod2dadeccb_8701_4c2b_8c6d_546d297b7e36.slice. Dec 16 13:18:34.726096 systemd[1]: Created slice kubepods-besteffort-pod51551839_7827_49fc_85bc_04ba08e6c0fe.slice - libcontainer container kubepods-besteffort-pod51551839_7827_49fc_85bc_04ba08e6c0fe.slice. 
Dec 16 13:18:34.733037 kubelet[3309]: W1216 13:18:34.732277 3309 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-26-5" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-26-5' and this object Dec 16 13:18:34.735245 kubelet[3309]: E1216 13:18:34.735191 3309 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-26-5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-26-5' and this object" logger="UnhandledError" Dec 16 13:18:34.737768 systemd[1]: Created slice kubepods-besteffort-podd6bb23e0_1cd2_43cb_848c_d51c96ac4611.slice - libcontainer container kubepods-besteffort-podd6bb23e0_1cd2_43cb_848c_d51c96ac4611.slice. Dec 16 13:18:34.750317 systemd[1]: Created slice kubepods-besteffort-podf26b9dbd_da7d_4346_92a8_22a4fd24016d.slice - libcontainer container kubepods-besteffort-podf26b9dbd_da7d_4346_92a8_22a4fd24016d.slice. Dec 16 13:18:34.760123 systemd[1]: Created slice kubepods-besteffort-podfa364a9d_bf3b_4e47_9ccb_93bdce84b381.slice - libcontainer container kubepods-besteffort-podfa364a9d_bf3b_4e47_9ccb_93bdce84b381.slice. Dec 16 13:18:34.774488 systemd[1]: Created slice kubepods-besteffort-pod5a4c47ac_d579_4a1f_ba5b_fd9d5a3cbce4.slice - libcontainer container kubepods-besteffort-pod5a4c47ac_d579_4a1f_ba5b_fd9d5a3cbce4.slice. 
Dec 16 13:18:34.785105 kubelet[3309]: I1216 13:18:34.784574 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kdg\" (UniqueName: \"kubernetes.io/projected/f26b9dbd-da7d-4346-92a8-22a4fd24016d-kube-api-access-55kdg\") pod \"calico-apiserver-7c88dfcf94-hn4df\" (UID: \"f26b9dbd-da7d-4346-92a8-22a4fd24016d\") " pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" Dec 16 13:18:34.785105 kubelet[3309]: I1216 13:18:34.784618 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa364a9d-bf3b-4e47-9ccb-93bdce84b381-goldmane-ca-bundle\") pod \"goldmane-666569f655-kcbpt\" (UID: \"fa364a9d-bf3b-4e47-9ccb-93bdce84b381\") " pod="calico-system/goldmane-666569f655-kcbpt" Dec 16 13:18:34.785105 kubelet[3309]: I1216 13:18:34.784644 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h5xj\" (UniqueName: \"kubernetes.io/projected/5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4-kube-api-access-8h5xj\") pod \"calico-kube-controllers-5d6c9b69bb-zds8r\" (UID: \"5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4\") " pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" Dec 16 13:18:34.785105 kubelet[3309]: I1216 13:18:34.784665 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqsm5\" (UniqueName: \"kubernetes.io/projected/2dadeccb-8701-4c2b-8c6d-546d297b7e36-kube-api-access-rqsm5\") pod \"coredns-668d6bf9bc-9b4vq\" (UID: \"2dadeccb-8701-4c2b-8c6d-546d297b7e36\") " pod="kube-system/coredns-668d6bf9bc-9b4vq" Dec 16 13:18:34.785105 kubelet[3309]: I1216 13:18:34.784684 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4-tigera-ca-bundle\") pod 
\"calico-kube-controllers-5d6c9b69bb-zds8r\" (UID: \"5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4\") " pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" Dec 16 13:18:34.785399 kubelet[3309]: I1216 13:18:34.784702 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b59f\" (UniqueName: \"kubernetes.io/projected/0841f344-51c4-439d-ae21-35f7829c767b-kube-api-access-8b59f\") pod \"coredns-668d6bf9bc-99pm5\" (UID: \"0841f344-51c4-439d-ae21-35f7829c767b\") " pod="kube-system/coredns-668d6bf9bc-99pm5" Dec 16 13:18:34.785399 kubelet[3309]: I1216 13:18:34.784720 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa364a9d-bf3b-4e47-9ccb-93bdce84b381-config\") pod \"goldmane-666569f655-kcbpt\" (UID: \"fa364a9d-bf3b-4e47-9ccb-93bdce84b381\") " pod="calico-system/goldmane-666569f655-kcbpt" Dec 16 13:18:34.785399 kubelet[3309]: I1216 13:18:34.784739 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5ww\" (UniqueName: \"kubernetes.io/projected/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-kube-api-access-td5ww\") pod \"whisker-d66b46c49-n5g75\" (UID: \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\") " pod="calico-system/whisker-d66b46c49-n5g75" Dec 16 13:18:34.785399 kubelet[3309]: I1216 13:18:34.784755 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvqp\" (UniqueName: \"kubernetes.io/projected/fa364a9d-bf3b-4e47-9ccb-93bdce84b381-kube-api-access-pbvqp\") pod \"goldmane-666569f655-kcbpt\" (UID: \"fa364a9d-bf3b-4e47-9ccb-93bdce84b381\") " pod="calico-system/goldmane-666569f655-kcbpt" Dec 16 13:18:34.785399 kubelet[3309]: I1216 13:18:34.784785 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0841f344-51c4-439d-ae21-35f7829c767b-config-volume\") pod \"coredns-668d6bf9bc-99pm5\" (UID: \"0841f344-51c4-439d-ae21-35f7829c767b\") " pod="kube-system/coredns-668d6bf9bc-99pm5" Dec 16 13:18:34.785554 kubelet[3309]: I1216 13:18:34.784802 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51551839-7827-49fc-85bc-04ba08e6c0fe-calico-apiserver-certs\") pod \"calico-apiserver-7c88dfcf94-jsc58\" (UID: \"51551839-7827-49fc-85bc-04ba08e6c0fe\") " pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" Dec 16 13:18:34.785554 kubelet[3309]: I1216 13:18:34.784823 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f26b9dbd-da7d-4346-92a8-22a4fd24016d-calico-apiserver-certs\") pod \"calico-apiserver-7c88dfcf94-hn4df\" (UID: \"f26b9dbd-da7d-4346-92a8-22a4fd24016d\") " pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" Dec 16 13:18:34.785554 kubelet[3309]: I1216 13:18:34.784839 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-ca-bundle\") pod \"whisker-d66b46c49-n5g75\" (UID: \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\") " pod="calico-system/whisker-d66b46c49-n5g75" Dec 16 13:18:34.785554 kubelet[3309]: I1216 13:18:34.784860 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dadeccb-8701-4c2b-8c6d-546d297b7e36-config-volume\") pod \"coredns-668d6bf9bc-9b4vq\" (UID: \"2dadeccb-8701-4c2b-8c6d-546d297b7e36\") " pod="kube-system/coredns-668d6bf9bc-9b4vq" Dec 16 13:18:34.785554 kubelet[3309]: I1216 13:18:34.784874 3309 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fa364a9d-bf3b-4e47-9ccb-93bdce84b381-goldmane-key-pair\") pod \"goldmane-666569f655-kcbpt\" (UID: \"fa364a9d-bf3b-4e47-9ccb-93bdce84b381\") " pod="calico-system/goldmane-666569f655-kcbpt" Dec 16 13:18:34.785708 kubelet[3309]: I1216 13:18:34.784893 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4qm\" (UniqueName: \"kubernetes.io/projected/51551839-7827-49fc-85bc-04ba08e6c0fe-kube-api-access-hx4qm\") pod \"calico-apiserver-7c88dfcf94-jsc58\" (UID: \"51551839-7827-49fc-85bc-04ba08e6c0fe\") " pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" Dec 16 13:18:34.785708 kubelet[3309]: I1216 13:18:34.784908 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-backend-key-pair\") pod \"whisker-d66b46c49-n5g75\" (UID: \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\") " pod="calico-system/whisker-d66b46c49-n5g75" Dec 16 13:18:35.008205 containerd[1997]: time="2025-12-16T13:18:35.008089184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99pm5,Uid:0841f344-51c4-439d-ae21-35f7829c767b,Namespace:kube-system,Attempt:0,}" Dec 16 13:18:35.023153 containerd[1997]: time="2025-12-16T13:18:35.021605562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9b4vq,Uid:2dadeccb-8701-4c2b-8c6d-546d297b7e36,Namespace:kube-system,Attempt:0,}" Dec 16 13:18:35.041007 containerd[1997]: time="2025-12-16T13:18:35.040590603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-jsc58,Uid:51551839-7827-49fc-85bc-04ba08e6c0fe,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:18:35.051097 containerd[1997]: time="2025-12-16T13:18:35.051028029Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d66b46c49-n5g75,Uid:d6bb23e0-1cd2-43cb-848c-d51c96ac4611,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:35.063592 containerd[1997]: time="2025-12-16T13:18:35.063551313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-hn4df,Uid:f26b9dbd-da7d-4346-92a8-22a4fd24016d,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:18:35.109473 containerd[1997]: time="2025-12-16T13:18:35.109436072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6c9b69bb-zds8r,Uid:5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:35.153329 containerd[1997]: time="2025-12-16T13:18:35.152785259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:18:35.429658 containerd[1997]: time="2025-12-16T13:18:35.429590307Z" level=error msg="Failed to destroy network for sandbox \"de360bfe3c1dd9f0daa012f04c41dcfba793978a73e678057588f75b8b32c714\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.433421 containerd[1997]: time="2025-12-16T13:18:35.433363379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-hn4df,Uid:f26b9dbd-da7d-4346-92a8-22a4fd24016d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de360bfe3c1dd9f0daa012f04c41dcfba793978a73e678057588f75b8b32c714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.456079 kubelet[3309]: E1216 13:18:35.455359 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de360bfe3c1dd9f0daa012f04c41dcfba793978a73e678057588f75b8b32c714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.458287 kubelet[3309]: E1216 13:18:35.458240 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de360bfe3c1dd9f0daa012f04c41dcfba793978a73e678057588f75b8b32c714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" Dec 16 13:18:35.458445 kubelet[3309]: E1216 13:18:35.458296 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de360bfe3c1dd9f0daa012f04c41dcfba793978a73e678057588f75b8b32c714\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" Dec 16 13:18:35.461405 containerd[1997]: time="2025-12-16T13:18:35.461358949Z" level=error msg="Failed to destroy network for sandbox \"d8d4077d4ad22a278241627b2dc3a6fa75ea114f99193a8fbe1bbb0e5256040a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.465036 kubelet[3309]: E1216 13:18:35.464983 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c88dfcf94-hn4df_calico-apiserver(f26b9dbd-da7d-4346-92a8-22a4fd24016d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7c88dfcf94-hn4df_calico-apiserver(f26b9dbd-da7d-4346-92a8-22a4fd24016d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de360bfe3c1dd9f0daa012f04c41dcfba793978a73e678057588f75b8b32c714\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:18:35.465481 containerd[1997]: time="2025-12-16T13:18:35.465425574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99pm5,Uid:0841f344-51c4-439d-ae21-35f7829c767b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d4077d4ad22a278241627b2dc3a6fa75ea114f99193a8fbe1bbb0e5256040a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.466529 kubelet[3309]: E1216 13:18:35.466489 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d4077d4ad22a278241627b2dc3a6fa75ea114f99193a8fbe1bbb0e5256040a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.466615 kubelet[3309]: E1216 13:18:35.466563 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d4077d4ad22a278241627b2dc3a6fa75ea114f99193a8fbe1bbb0e5256040a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-99pm5" Dec 16 13:18:35.466615 kubelet[3309]: E1216 13:18:35.466591 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d4077d4ad22a278241627b2dc3a6fa75ea114f99193a8fbe1bbb0e5256040a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-99pm5" Dec 16 13:18:35.466799 kubelet[3309]: E1216 13:18:35.466636 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-99pm5_kube-system(0841f344-51c4-439d-ae21-35f7829c767b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-99pm5_kube-system(0841f344-51c4-439d-ae21-35f7829c767b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d4077d4ad22a278241627b2dc3a6fa75ea114f99193a8fbe1bbb0e5256040a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-99pm5" podUID="0841f344-51c4-439d-ae21-35f7829c767b" Dec 16 13:18:35.476708 containerd[1997]: time="2025-12-16T13:18:35.476574004Z" level=error msg="Failed to destroy network for sandbox \"e3c6eaa9f2df44628a9d5f7247788cda7648e7b647f199cf85174a129a9c4ad5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.489423 containerd[1997]: time="2025-12-16T13:18:35.489352409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d66b46c49-n5g75,Uid:d6bb23e0-1cd2-43cb-848c-d51c96ac4611,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"e3c6eaa9f2df44628a9d5f7247788cda7648e7b647f199cf85174a129a9c4ad5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.489821 kubelet[3309]: E1216 13:18:35.489689 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c6eaa9f2df44628a9d5f7247788cda7648e7b647f199cf85174a129a9c4ad5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.489821 kubelet[3309]: E1216 13:18:35.489754 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c6eaa9f2df44628a9d5f7247788cda7648e7b647f199cf85174a129a9c4ad5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d66b46c49-n5g75" Dec 16 13:18:35.489821 kubelet[3309]: E1216 13:18:35.489779 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c6eaa9f2df44628a9d5f7247788cda7648e7b647f199cf85174a129a9c4ad5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d66b46c49-n5g75" Dec 16 13:18:35.490433 kubelet[3309]: E1216 13:18:35.489827 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d66b46c49-n5g75_calico-system(d6bb23e0-1cd2-43cb-848c-d51c96ac4611)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"whisker-d66b46c49-n5g75_calico-system(d6bb23e0-1cd2-43cb-848c-d51c96ac4611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3c6eaa9f2df44628a9d5f7247788cda7648e7b647f199cf85174a129a9c4ad5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d66b46c49-n5g75" podUID="d6bb23e0-1cd2-43cb-848c-d51c96ac4611" Dec 16 13:18:35.494476 containerd[1997]: time="2025-12-16T13:18:35.494268611Z" level=error msg="Failed to destroy network for sandbox \"3cb0c5e77c50dbc78f612e199c0c7ff7b5bec204db50c916e5aeb67b805dd799\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.494476 containerd[1997]: time="2025-12-16T13:18:35.494406692Z" level=error msg="Failed to destroy network for sandbox \"1481d9b0115a09f8f4a94dd258e82f451c7a5790a999bc7fe9575d7c7e549128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.515158 containerd[1997]: time="2025-12-16T13:18:35.498663490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-jsc58,Uid:51551839-7827-49fc-85bc-04ba08e6c0fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1481d9b0115a09f8f4a94dd258e82f451c7a5790a999bc7fe9575d7c7e549128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.515817 containerd[1997]: time="2025-12-16T13:18:35.504783711Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9b4vq,Uid:2dadeccb-8701-4c2b-8c6d-546d297b7e36,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb0c5e77c50dbc78f612e199c0c7ff7b5bec204db50c916e5aeb67b805dd799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.515817 containerd[1997]: time="2025-12-16T13:18:35.505133861Z" level=error msg="Failed to destroy network for sandbox \"36137043735ebfb6a389a30326c50f5bfd12c7449dd6f22d96cee2cfd8c2efe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.516512 kubelet[3309]: E1216 13:18:35.516168 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1481d9b0115a09f8f4a94dd258e82f451c7a5790a999bc7fe9575d7c7e549128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.516512 kubelet[3309]: E1216 13:18:35.516219 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1481d9b0115a09f8f4a94dd258e82f451c7a5790a999bc7fe9575d7c7e549128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" Dec 16 13:18:35.516512 kubelet[3309]: E1216 13:18:35.516239 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1481d9b0115a09f8f4a94dd258e82f451c7a5790a999bc7fe9575d7c7e549128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" Dec 16 13:18:35.516788 kubelet[3309]: E1216 13:18:35.516410 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c88dfcf94-jsc58_calico-apiserver(51551839-7827-49fc-85bc-04ba08e6c0fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c88dfcf94-jsc58_calico-apiserver(51551839-7827-49fc-85bc-04ba08e6c0fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1481d9b0115a09f8f4a94dd258e82f451c7a5790a999bc7fe9575d7c7e549128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:18:35.516788 kubelet[3309]: E1216 13:18:35.516524 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb0c5e77c50dbc78f612e199c0c7ff7b5bec204db50c916e5aeb67b805dd799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.516788 kubelet[3309]: E1216 13:18:35.516559 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb0c5e77c50dbc78f612e199c0c7ff7b5bec204db50c916e5aeb67b805dd799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9b4vq" Dec 16 13:18:35.516959 kubelet[3309]: E1216 13:18:35.516583 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cb0c5e77c50dbc78f612e199c0c7ff7b5bec204db50c916e5aeb67b805dd799\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9b4vq" Dec 16 13:18:35.516959 kubelet[3309]: E1216 13:18:35.516620 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9b4vq_kube-system(2dadeccb-8701-4c2b-8c6d-546d297b7e36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9b4vq_kube-system(2dadeccb-8701-4c2b-8c6d-546d297b7e36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cb0c5e77c50dbc78f612e199c0c7ff7b5bec204db50c916e5aeb67b805dd799\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9b4vq" podUID="2dadeccb-8701-4c2b-8c6d-546d297b7e36" Dec 16 13:18:35.519029 containerd[1997]: time="2025-12-16T13:18:35.518927148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6c9b69bb-zds8r,Uid:5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36137043735ebfb6a389a30326c50f5bfd12c7449dd6f22d96cee2cfd8c2efe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 
16 13:18:35.519493 kubelet[3309]: E1216 13:18:35.519451 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36137043735ebfb6a389a30326c50f5bfd12c7449dd6f22d96cee2cfd8c2efe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.519604 kubelet[3309]: E1216 13:18:35.519505 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36137043735ebfb6a389a30326c50f5bfd12c7449dd6f22d96cee2cfd8c2efe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" Dec 16 13:18:35.519604 kubelet[3309]: E1216 13:18:35.519523 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36137043735ebfb6a389a30326c50f5bfd12c7449dd6f22d96cee2cfd8c2efe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" Dec 16 13:18:35.519604 kubelet[3309]: E1216 13:18:35.519573 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d6c9b69bb-zds8r_calico-system(5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d6c9b69bb-zds8r_calico-system(5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"36137043735ebfb6a389a30326c50f5bfd12c7449dd6f22d96cee2cfd8c2efe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:18:35.598262 systemd[1]: Created slice kubepods-besteffort-pod0d725eaf_63cb_4894_b5fe_56fa81c91e00.slice - libcontainer container kubepods-besteffort-pod0d725eaf_63cb_4894_b5fe_56fa81c91e00.slice. Dec 16 13:18:35.600972 containerd[1997]: time="2025-12-16T13:18:35.600939185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p2jzr,Uid:0d725eaf-63cb-4894-b5fe-56fa81c91e00,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:35.714203 containerd[1997]: time="2025-12-16T13:18:35.714074155Z" level=error msg="Failed to destroy network for sandbox \"63f8987cd2b83096fafd795dad7ae17fb4473f78d9b3c60f238d975200bdf698\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.718937 systemd[1]: run-netns-cni\x2d51d3eb7c\x2d3450\x2d1c1a\x2d238c\x2d8b90e1e9e719.mount: Deactivated successfully. 
Dec 16 13:18:35.720020 containerd[1997]: time="2025-12-16T13:18:35.719966058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p2jzr,Uid:0d725eaf-63cb-4894-b5fe-56fa81c91e00,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63f8987cd2b83096fafd795dad7ae17fb4473f78d9b3c60f238d975200bdf698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.720698 kubelet[3309]: E1216 13:18:35.720590 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63f8987cd2b83096fafd795dad7ae17fb4473f78d9b3c60f238d975200bdf698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:35.722248 kubelet[3309]: E1216 13:18:35.720673 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63f8987cd2b83096fafd795dad7ae17fb4473f78d9b3c60f238d975200bdf698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p2jzr" Dec 16 13:18:35.722248 kubelet[3309]: E1216 13:18:35.721175 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63f8987cd2b83096fafd795dad7ae17fb4473f78d9b3c60f238d975200bdf698\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p2jzr" 
Dec 16 13:18:35.722248 kubelet[3309]: E1216 13:18:35.721257 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63f8987cd2b83096fafd795dad7ae17fb4473f78d9b3c60f238d975200bdf698\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:35.890183 kubelet[3309]: E1216 13:18:35.890051 3309 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 16 13:18:35.890183 kubelet[3309]: E1216 13:18:35.890177 3309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa364a9d-bf3b-4e47-9ccb-93bdce84b381-goldmane-ca-bundle podName:fa364a9d-bf3b-4e47-9ccb-93bdce84b381 nodeName:}" failed. No retries permitted until 2025-12-16 13:18:36.390154224 +0000 UTC m=+35.315768417 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/fa364a9d-bf3b-4e47-9ccb-93bdce84b381-goldmane-ca-bundle") pod "goldmane-666569f655-kcbpt" (UID: "fa364a9d-bf3b-4e47-9ccb-93bdce84b381") : failed to sync configmap cache: timed out waiting for the condition Dec 16 13:18:36.573305 containerd[1997]: time="2025-12-16T13:18:36.573257915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kcbpt,Uid:fa364a9d-bf3b-4e47-9ccb-93bdce84b381,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:36.708938 containerd[1997]: time="2025-12-16T13:18:36.708880481Z" level=error msg="Failed to destroy network for sandbox \"a0fca6a2aaee7bb93ac2386f99fa24a1ddbbe1d7388a70f3f6a1de33bcdc0ce4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:36.713118 containerd[1997]: time="2025-12-16T13:18:36.712170283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kcbpt,Uid:fa364a9d-bf3b-4e47-9ccb-93bdce84b381,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fca6a2aaee7bb93ac2386f99fa24a1ddbbe1d7388a70f3f6a1de33bcdc0ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:36.713417 kubelet[3309]: E1216 13:18:36.712490 3309 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fca6a2aaee7bb93ac2386f99fa24a1ddbbe1d7388a70f3f6a1de33bcdc0ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:18:36.713417 kubelet[3309]: 
E1216 13:18:36.712554 3309 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fca6a2aaee7bb93ac2386f99fa24a1ddbbe1d7388a70f3f6a1de33bcdc0ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kcbpt" Dec 16 13:18:36.713417 kubelet[3309]: E1216 13:18:36.712582 3309 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fca6a2aaee7bb93ac2386f99fa24a1ddbbe1d7388a70f3f6a1de33bcdc0ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kcbpt" Dec 16 13:18:36.713706 kubelet[3309]: E1216 13:18:36.712639 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kcbpt_calico-system(fa364a9d-bf3b-4e47-9ccb-93bdce84b381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kcbpt_calico-system(fa364a9d-bf3b-4e47-9ccb-93bdce84b381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0fca6a2aaee7bb93ac2386f99fa24a1ddbbe1d7388a70f3f6a1de33bcdc0ce4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 13:18:36.716414 systemd[1]: run-netns-cni\x2d5f48facc\x2d9872\x2d0fa8\x2da503\x2d383351d83a1d.mount: Deactivated successfully. 
Dec 16 13:18:41.495954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount68355007.mount: Deactivated successfully. Dec 16 13:18:41.558608 containerd[1997]: time="2025-12-16T13:18:41.547438936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:41.565707 containerd[1997]: time="2025-12-16T13:18:41.564513048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:18:41.568655 containerd[1997]: time="2025-12-16T13:18:41.568564144Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:41.573629 containerd[1997]: time="2025-12-16T13:18:41.573583315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:18:41.573980 containerd[1997]: time="2025-12-16T13:18:41.573959386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.42107848s" Dec 16 13:18:41.574330 containerd[1997]: time="2025-12-16T13:18:41.574310487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:18:41.602809 containerd[1997]: time="2025-12-16T13:18:41.602767690Z" level=info msg="CreateContainer within sandbox \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:18:41.666077 containerd[1997]: time="2025-12-16T13:18:41.665236104Z" level=info msg="Container 9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:18:41.666045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210530728.mount: Deactivated successfully. Dec 16 13:18:41.752207 containerd[1997]: time="2025-12-16T13:18:41.752100517Z" level=info msg="CreateContainer within sandbox \"c0be44b6b4cacd03ce53b6d0cb40c3328c0485b86c71c53e5deb549d6271825e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423\"" Dec 16 13:18:41.753720 containerd[1997]: time="2025-12-16T13:18:41.753680596Z" level=info msg="StartContainer for \"9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423\"" Dec 16 13:18:41.757072 containerd[1997]: time="2025-12-16T13:18:41.756556824Z" level=info msg="connecting to shim 9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423" address="unix:///run/containerd/s/84e8ed16ddba4bbd59f7bcb7bba6a8cef14da334a515300434f13233da0caac6" protocol=ttrpc version=3 Dec 16 13:18:41.895594 systemd[1]: Started cri-containerd-9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423.scope - libcontainer container 9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423. Dec 16 13:18:42.016817 containerd[1997]: time="2025-12-16T13:18:42.016718382Z" level=info msg="StartContainer for \"9d3e4316bd75ff170fa26ef8df6acad52ef4fa535c84c8e096eb414f941d9423\" returns successfully" Dec 16 13:18:42.160669 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:18:42.162377 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Dec 16 13:18:42.215297 kubelet[3309]: I1216 13:18:42.214579 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nt4q2" podStartSLOduration=1.4190584689999999 podStartE2EDuration="19.214556138s" podCreationTimestamp="2025-12-16 13:18:23 +0000 UTC" firstStartedPulling="2025-12-16 13:18:23.779414247 +0000 UTC m=+22.705028445" lastFinishedPulling="2025-12-16 13:18:41.574911918 +0000 UTC m=+40.500526114" observedRunningTime="2025-12-16 13:18:42.214240079 +0000 UTC m=+41.139854282" watchObservedRunningTime="2025-12-16 13:18:42.214556138 +0000 UTC m=+41.140170342" Dec 16 13:18:42.659373 kubelet[3309]: I1216 13:18:42.658963 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-backend-key-pair\") pod \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\" (UID: \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\") " Dec 16 13:18:42.659373 kubelet[3309]: I1216 13:18:42.659026 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td5ww\" (UniqueName: \"kubernetes.io/projected/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-kube-api-access-td5ww\") pod \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\" (UID: \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\") " Dec 16 13:18:42.661087 kubelet[3309]: I1216 13:18:42.660018 3309 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-ca-bundle\") pod \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\" (UID: \"d6bb23e0-1cd2-43cb-848c-d51c96ac4611\") " Dec 16 13:18:42.661927 kubelet[3309]: I1216 13:18:42.661891 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"d6bb23e0-1cd2-43cb-848c-d51c96ac4611" (UID: "d6bb23e0-1cd2-43cb-848c-d51c96ac4611"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:18:42.687568 kubelet[3309]: I1216 13:18:42.685586 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-kube-api-access-td5ww" (OuterVolumeSpecName: "kube-api-access-td5ww") pod "d6bb23e0-1cd2-43cb-848c-d51c96ac4611" (UID: "d6bb23e0-1cd2-43cb-848c-d51c96ac4611"). InnerVolumeSpecName "kube-api-access-td5ww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:18:42.689364 kubelet[3309]: I1216 13:18:42.686206 3309 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d6bb23e0-1cd2-43cb-848c-d51c96ac4611" (UID: "d6bb23e0-1cd2-43cb-848c-d51c96ac4611"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:18:42.690780 systemd[1]: var-lib-kubelet-pods-d6bb23e0\x2d1cd2\x2d43cb\x2d848c\x2dd51c96ac4611-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtd5ww.mount: Deactivated successfully. Dec 16 13:18:42.690953 systemd[1]: var-lib-kubelet-pods-d6bb23e0\x2d1cd2\x2d43cb\x2d848c\x2dd51c96ac4611-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 13:18:42.760452 kubelet[3309]: I1216 13:18:42.760399 3309 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-backend-key-pair\") on node \"ip-172-31-26-5\" DevicePath \"\"" Dec 16 13:18:42.760452 kubelet[3309]: I1216 13:18:42.760440 3309 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-td5ww\" (UniqueName: \"kubernetes.io/projected/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-kube-api-access-td5ww\") on node \"ip-172-31-26-5\" DevicePath \"\"" Dec 16 13:18:42.760452 kubelet[3309]: I1216 13:18:42.760455 3309 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6bb23e0-1cd2-43cb-848c-d51c96ac4611-whisker-ca-bundle\") on node \"ip-172-31-26-5\" DevicePath \"\"" Dec 16 13:18:43.179151 systemd[1]: Removed slice kubepods-besteffort-podd6bb23e0_1cd2_43cb_848c_d51c96ac4611.slice - libcontainer container kubepods-besteffort-podd6bb23e0_1cd2_43cb_848c_d51c96ac4611.slice. Dec 16 13:18:43.321188 systemd[1]: Created slice kubepods-besteffort-podc5ba7f20_53de_469e_ab23_68e66f875f69.slice - libcontainer container kubepods-besteffort-podc5ba7f20_53de_469e_ab23_68e66f875f69.slice. 
Dec 16 13:18:43.465140 kubelet[3309]: I1216 13:18:43.464957 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5ba7f20-53de-469e-ab23-68e66f875f69-whisker-backend-key-pair\") pod \"whisker-6746fb9d97-xwbhb\" (UID: \"c5ba7f20-53de-469e-ab23-68e66f875f69\") " pod="calico-system/whisker-6746fb9d97-xwbhb" Dec 16 13:18:43.465140 kubelet[3309]: I1216 13:18:43.465008 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ba7f20-53de-469e-ab23-68e66f875f69-whisker-ca-bundle\") pod \"whisker-6746fb9d97-xwbhb\" (UID: \"c5ba7f20-53de-469e-ab23-68e66f875f69\") " pod="calico-system/whisker-6746fb9d97-xwbhb" Dec 16 13:18:43.465140 kubelet[3309]: I1216 13:18:43.465028 3309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pxf5\" (UniqueName: \"kubernetes.io/projected/c5ba7f20-53de-469e-ab23-68e66f875f69-kube-api-access-7pxf5\") pod \"whisker-6746fb9d97-xwbhb\" (UID: \"c5ba7f20-53de-469e-ab23-68e66f875f69\") " pod="calico-system/whisker-6746fb9d97-xwbhb" Dec 16 13:18:43.579082 kubelet[3309]: I1216 13:18:43.578962 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6bb23e0-1cd2-43cb-848c-d51c96ac4611" path="/var/lib/kubelet/pods/d6bb23e0-1cd2-43cb-848c-d51c96ac4611/volumes" Dec 16 13:18:43.626420 containerd[1997]: time="2025-12-16T13:18:43.626371549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6746fb9d97-xwbhb,Uid:c5ba7f20-53de-469e-ab23-68e66f875f69,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:44.298729 (udev-worker)[4481]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 13:18:44.304766 systemd-networkd[1860]: cali68cfd16a5fc: Link UP Dec 16 13:18:44.306713 systemd-networkd[1860]: cali68cfd16a5fc: Gained carrier Dec 16 13:18:44.348320 containerd[1997]: 2025-12-16 13:18:43.681 [INFO][4602] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:44.348320 containerd[1997]: 2025-12-16 13:18:43.748 [INFO][4602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0 whisker-6746fb9d97- calico-system c5ba7f20-53de-469e-ab23-68e66f875f69 878 0 2025-12-16 13:18:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6746fb9d97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-5 whisker-6746fb9d97-xwbhb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali68cfd16a5fc [] [] }} ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-" Dec 16 13:18:44.348320 containerd[1997]: 2025-12-16 13:18:43.748 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.348320 containerd[1997]: 2025-12-16 13:18:44.164 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" HandleID="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Workload="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.165 
[INFO][4614] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" HandleID="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Workload="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-5", "pod":"whisker-6746fb9d97-xwbhb", "timestamp":"2025-12-16 13:18:44.163997259 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.165 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.166 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.167 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.202 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" host="ip-172-31-26-5" Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.230 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.246 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.250 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.253 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:44.350027 containerd[1997]: 2025-12-16 13:18:44.253 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" host="ip-172-31-26-5" Dec 16 13:18:44.355965 containerd[1997]: 2025-12-16 13:18:44.256 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5 Dec 16 13:18:44.355965 containerd[1997]: 2025-12-16 13:18:44.264 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" host="ip-172-31-26-5" Dec 16 13:18:44.355965 containerd[1997]: 2025-12-16 13:18:44.274 [INFO][4614] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.1/26] block=192.168.82.0/26 
handle="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" host="ip-172-31-26-5" Dec 16 13:18:44.355965 containerd[1997]: 2025-12-16 13:18:44.274 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.1/26] handle="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" host="ip-172-31-26-5" Dec 16 13:18:44.355965 containerd[1997]: 2025-12-16 13:18:44.274 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:18:44.355965 containerd[1997]: 2025-12-16 13:18:44.274 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.1/26] IPv6=[] ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" HandleID="k8s-pod-network.e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Workload="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.361016 containerd[1997]: 2025-12-16 13:18:44.280 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0", GenerateName:"whisker-6746fb9d97-", Namespace:"calico-system", SelfLink:"", UID:"c5ba7f20-53de-469e-ab23-68e66f875f69", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6746fb9d97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"whisker-6746fb9d97-xwbhb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali68cfd16a5fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:44.361016 containerd[1997]: 2025-12-16 13:18:44.281 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.1/32] ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.363293 containerd[1997]: 2025-12-16 13:18:44.281 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68cfd16a5fc ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.363293 containerd[1997]: 2025-12-16 13:18:44.308 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.364014 containerd[1997]: 2025-12-16 13:18:44.309 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" 
Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0", GenerateName:"whisker-6746fb9d97-", Namespace:"calico-system", SelfLink:"", UID:"c5ba7f20-53de-469e-ab23-68e66f875f69", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6746fb9d97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5", Pod:"whisker-6746fb9d97-xwbhb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali68cfd16a5fc", MAC:"ae:24:09:a0:66:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:44.365914 containerd[1997]: 2025-12-16 13:18:44.340 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" Namespace="calico-system" Pod="whisker-6746fb9d97-xwbhb" WorkloadEndpoint="ip--172--31--26--5-k8s-whisker--6746fb9d97--xwbhb-eth0" Dec 16 13:18:44.615641 containerd[1997]: 
time="2025-12-16T13:18:44.615553596Z" level=info msg="connecting to shim e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5" address="unix:///run/containerd/s/ac2421439379d3094595b9d8a111c94fbd3cc31322ee7a0ff08dc83e0f319ba0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:44.649463 systemd[1]: Started cri-containerd-e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5.scope - libcontainer container e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5. Dec 16 13:18:44.738316 containerd[1997]: time="2025-12-16T13:18:44.737731731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6746fb9d97-xwbhb,Uid:c5ba7f20-53de-469e-ab23-68e66f875f69,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7a4e304cf216542f93b9b20662a6164b8399dc7e609c6832468b3419c5d23a5\"" Dec 16 13:18:44.753425 containerd[1997]: time="2025-12-16T13:18:44.753375560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:18:45.034243 containerd[1997]: time="2025-12-16T13:18:45.034105471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:45.036659 containerd[1997]: time="2025-12-16T13:18:45.036523660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:18:45.036659 containerd[1997]: time="2025-12-16T13:18:45.036578241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:18:45.036922 kubelet[3309]: E1216 13:18:45.036844 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:45.036922 kubelet[3309]: E1216 13:18:45.036907 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:45.058166 kubelet[3309]: E1216 13:18:45.058096 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b88f196c3d5438da0becadb94577a42,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:45.061735 containerd[1997]: time="2025-12-16T13:18:45.061694933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:18:45.333830 containerd[1997]: time="2025-12-16T13:18:45.333705207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:45.335817 containerd[1997]: time="2025-12-16T13:18:45.335747203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:18:45.335972 containerd[1997]: time="2025-12-16T13:18:45.335836781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:18:45.336038 kubelet[3309]: E1216 13:18:45.335999 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:18:45.336112 kubelet[3309]: E1216 13:18:45.336046 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:18:45.336305 kubelet[3309]: E1216 13:18:45.336165 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:45.337650 kubelet[3309]: E1216 13:18:45.337607 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69" Dec 16 13:18:46.177321 systemd-networkd[1860]: cali68cfd16a5fc: Gained IPv6LL Dec 16 13:18:46.181927 kubelet[3309]: E1216 13:18:46.181798 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69" Dec 16 13:18:46.573019 containerd[1997]: time="2025-12-16T13:18:46.572749872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6c9b69bb-zds8r,Uid:5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:46.573019 containerd[1997]: time="2025-12-16T13:18:46.572783189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-jsc58,Uid:51551839-7827-49fc-85bc-04ba08e6c0fe,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:18:46.750448 systemd-networkd[1860]: calif21c9082992: Link UP Dec 16 13:18:46.751732 systemd-networkd[1860]: calif21c9082992: Gained carrier Dec 16 13:18:46.800667 containerd[1997]: 2025-12-16 13:18:46.613 [INFO][4819] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:46.800667 containerd[1997]: 2025-12-16 13:18:46.628 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0 calico-kube-controllers-5d6c9b69bb- calico-system 5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4 810 0 2025-12-16 13:18:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d6c9b69bb projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-5 calico-kube-controllers-5d6c9b69bb-zds8r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif21c9082992 [] [] }} ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-" Dec 16 13:18:46.800667 containerd[1997]: 2025-12-16 13:18:46.628 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.800667 containerd[1997]: 2025-12-16 13:18:46.678 [INFO][4843] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" HandleID="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Workload="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.681 [INFO][4843] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" HandleID="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Workload="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-5", "pod":"calico-kube-controllers-5d6c9b69bb-zds8r", "timestamp":"2025-12-16 13:18:46.678924206 +0000 UTC"}, Hostname:"ip-172-31-26-5", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.681 [INFO][4843] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.681 [INFO][4843] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.681 [INFO][4843] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.691 [INFO][4843] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" host="ip-172-31-26-5" Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.698 [INFO][4843] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.705 [INFO][4843] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.707 [INFO][4843] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:46.801010 containerd[1997]: 2025-12-16 13:18:46.710 [INFO][4843] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.710 [INFO][4843] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" host="ip-172-31-26-5" Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.712 [INFO][4843] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.719 [INFO][4843] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" host="ip-172-31-26-5" Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.733 [INFO][4843] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.2/26] block=192.168.82.0/26 handle="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" host="ip-172-31-26-5" Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.733 [INFO][4843] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.2/26] handle="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" host="ip-172-31-26-5" Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.733 [INFO][4843] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:18:46.801466 containerd[1997]: 2025-12-16 13:18:46.733 [INFO][4843] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.2/26] IPv6=[] ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" HandleID="k8s-pod-network.74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Workload="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.801742 containerd[1997]: 2025-12-16 13:18:46.743 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0", GenerateName:"calico-kube-controllers-5d6c9b69bb-", Namespace:"calico-system", SelfLink:"", UID:"5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6c9b69bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"calico-kube-controllers-5d6c9b69bb-zds8r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif21c9082992", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:46.801842 containerd[1997]: 2025-12-16 13:18:46.743 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.2/32] ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.801842 containerd[1997]: 2025-12-16 13:18:46.743 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif21c9082992 ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.801842 containerd[1997]: 2025-12-16 13:18:46.751 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.801974 containerd[1997]: 2025-12-16 13:18:46.752 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0", GenerateName:"calico-kube-controllers-5d6c9b69bb-", Namespace:"calico-system", SelfLink:"", UID:"5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d6c9b69bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee", Pod:"calico-kube-controllers-5d6c9b69bb-zds8r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif21c9082992", MAC:"ea:e7:fb:d0:19:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:46.802078 containerd[1997]: 2025-12-16 13:18:46.793 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" Namespace="calico-system" Pod="calico-kube-controllers-5d6c9b69bb-zds8r" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--kube--controllers--5d6c9b69bb--zds8r-eth0" Dec 16 13:18:46.872407 containerd[1997]: time="2025-12-16T13:18:46.872356926Z" 
level=info msg="connecting to shim 74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee" address="unix:///run/containerd/s/373a200f40163ea81daffb2658ee09cf30e2fc7be7fc67bfb5f065f0fe3c5177" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:46.948641 systemd[1]: Started cri-containerd-74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee.scope - libcontainer container 74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee. Dec 16 13:18:46.996919 systemd-networkd[1860]: cali2e36abc1e74: Link UP Dec 16 13:18:47.000520 systemd-networkd[1860]: cali2e36abc1e74: Gained carrier Dec 16 13:18:47.052874 containerd[1997]: 2025-12-16 13:18:46.612 [INFO][4814] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:47.052874 containerd[1997]: 2025-12-16 13:18:46.636 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0 calico-apiserver-7c88dfcf94- calico-apiserver 51551839-7827-49fc-85bc-04ba08e6c0fe 806 0 2025-12-16 13:18:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c88dfcf94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-5 calico-apiserver-7c88dfcf94-jsc58 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e36abc1e74 [] [] }} ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-" Dec 16 13:18:47.052874 containerd[1997]: 2025-12-16 13:18:46.636 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" 
Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.052874 containerd[1997]: 2025-12-16 13:18:46.700 [INFO][4848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" HandleID="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Workload="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.702 [INFO][4848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" HandleID="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Workload="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001038d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-5", "pod":"calico-apiserver-7c88dfcf94-jsc58", "timestamp":"2025-12-16 13:18:46.700817815 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.702 [INFO][4848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.733 [INFO][4848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.733 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.818 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" host="ip-172-31-26-5" Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.841 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.874 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.885 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:47.054375 containerd[1997]: 2025-12-16 13:18:46.895 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.897 [INFO][4848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" host="ip-172-31-26-5" Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.911 [INFO][4848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2 Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.933 [INFO][4848] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" host="ip-172-31-26-5" Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.976 [INFO][4848] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.3/26] block=192.168.82.0/26 
handle="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" host="ip-172-31-26-5" Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.978 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.3/26] handle="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" host="ip-172-31-26-5" Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.978 [INFO][4848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:18:47.055366 containerd[1997]: 2025-12-16 13:18:46.978 [INFO][4848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.3/26] IPv6=[] ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" HandleID="k8s-pod-network.a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Workload="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.058424 containerd[1997]: 2025-12-16 13:18:46.988 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0", GenerateName:"calico-apiserver-7c88dfcf94-", Namespace:"calico-apiserver", SelfLink:"", UID:"51551839-7827-49fc-85bc-04ba08e6c0fe", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c88dfcf94", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"calico-apiserver-7c88dfcf94-jsc58", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e36abc1e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:47.058550 containerd[1997]: 2025-12-16 13:18:46.988 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.3/32] ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.058550 containerd[1997]: 2025-12-16 13:18:46.988 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e36abc1e74 ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.058550 containerd[1997]: 2025-12-16 13:18:47.002 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.058679 containerd[1997]: 2025-12-16 13:18:47.003 [INFO][4814] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0", GenerateName:"calico-apiserver-7c88dfcf94-", Namespace:"calico-apiserver", SelfLink:"", UID:"51551839-7827-49fc-85bc-04ba08e6c0fe", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c88dfcf94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2", Pod:"calico-apiserver-7c88dfcf94-jsc58", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e36abc1e74", MAC:"1a:91:2a:28:10:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:47.058775 containerd[1997]: 2025-12-16 13:18:47.044 [INFO][4814] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-jsc58" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--jsc58-eth0" Dec 16 13:18:47.100543 containerd[1997]: time="2025-12-16T13:18:47.100501474Z" level=info msg="connecting to shim a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2" address="unix:///run/containerd/s/222b51a875f35b174e3d7f13508ddbd4748ccb19bf05a21ed6f09b96cdb968ef" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:47.143654 systemd[1]: Started cri-containerd-a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2.scope - libcontainer container a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2. Dec 16 13:18:47.199993 containerd[1997]: time="2025-12-16T13:18:47.199946722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d6c9b69bb-zds8r,Uid:5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4,Namespace:calico-system,Attempt:0,} returns sandbox id \"74ebd0518d9d6bd939e89178c5d89f85e6fd80a7eb0c51a9044d87cd81faa1ee\"" Dec 16 13:18:47.202093 containerd[1997]: time="2025-12-16T13:18:47.201942133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:18:47.264175 containerd[1997]: time="2025-12-16T13:18:47.264049339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-jsc58,Uid:51551839-7827-49fc-85bc-04ba08e6c0fe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a1df80d0a894bab1678c937d3943f065fe637a4d8ad1d9b8e5c89c41918914f2\"" Dec 16 13:18:47.452440 containerd[1997]: time="2025-12-16T13:18:47.452161867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:47.454550 containerd[1997]: time="2025-12-16T13:18:47.454486923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:18:47.454670 containerd[1997]: time="2025-12-16T13:18:47.454573925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:18:47.454764 kubelet[3309]: E1216 13:18:47.454721 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:18:47.455200 kubelet[3309]: E1216 13:18:47.454772 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:18:47.455200 kubelet[3309]: E1216 13:18:47.454971 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6c9b69bb-zds8r_calico-system(5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:47.456366 kubelet[3309]: E1216 13:18:47.456150 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:18:47.456568 containerd[1997]: time="2025-12-16T13:18:47.456539064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:18:47.731967 containerd[1997]: 
time="2025-12-16T13:18:47.731841999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:47.733931 containerd[1997]: time="2025-12-16T13:18:47.733889628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:18:47.734074 containerd[1997]: time="2025-12-16T13:18:47.733999648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:47.734313 kubelet[3309]: E1216 13:18:47.734206 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:47.734313 kubelet[3309]: E1216 13:18:47.734274 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:47.734692 kubelet[3309]: E1216 13:18:47.734613 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx4qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-jsc58_calico-apiserver(51551839-7827-49fc-85bc-04ba08e6c0fe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:47.735921 kubelet[3309]: E1216 13:18:47.735860 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:18:47.969350 systemd-networkd[1860]: calif21c9082992: Gained IPv6LL Dec 16 13:18:48.097462 systemd-networkd[1860]: cali2e36abc1e74: Gained IPv6LL Dec 16 13:18:48.204695 kubelet[3309]: E1216 13:18:48.204654 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:18:48.208466 kubelet[3309]: E1216 13:18:48.208234 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:18:49.214505 kubelet[3309]: E1216 13:18:49.214459 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:18:49.215821 kubelet[3309]: E1216 13:18:49.215744 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:18:49.582137 containerd[1997]: time="2025-12-16T13:18:49.582026714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kcbpt,Uid:fa364a9d-bf3b-4e47-9ccb-93bdce84b381,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:49.582457 containerd[1997]: time="2025-12-16T13:18:49.582202473Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9b4vq,Uid:2dadeccb-8701-4c2b-8c6d-546d297b7e36,Namespace:kube-system,Attempt:0,}" Dec 16 13:18:49.777693 systemd-networkd[1860]: cali7683f8cbd7b: Link UP Dec 16 13:18:49.781672 systemd-networkd[1860]: cali7683f8cbd7b: Gained carrier Dec 16 13:18:49.786651 kubelet[3309]: I1216 13:18:49.786557 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:18:49.819959 containerd[1997]: 2025-12-16 13:18:49.625 [INFO][5014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:49.819959 containerd[1997]: 2025-12-16 13:18:49.641 [INFO][5014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0 coredns-668d6bf9bc- kube-system 2dadeccb-8701-4c2b-8c6d-546d297b7e36 804 0 2025-12-16 13:18:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-5 coredns-668d6bf9bc-9b4vq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7683f8cbd7b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-" Dec 16 13:18:49.819959 containerd[1997]: 2025-12-16 13:18:49.641 [INFO][5014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.819959 containerd[1997]: 2025-12-16 13:18:49.695 [INFO][5040] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" HandleID="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Workload="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.695 [INFO][5040] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" HandleID="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Workload="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-5", "pod":"coredns-668d6bf9bc-9b4vq", "timestamp":"2025-12-16 13:18:49.695324088 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.695 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.695 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.695 [INFO][5040] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.708 [INFO][5040] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" host="ip-172-31-26-5" Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.718 [INFO][5040] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.731 [INFO][5040] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.734 [INFO][5040] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.741 [INFO][5040] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:49.820333 containerd[1997]: 2025-12-16 13:18:49.741 [INFO][5040] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" host="ip-172-31-26-5" Dec 16 13:18:49.820753 containerd[1997]: 2025-12-16 13:18:49.746 [INFO][5040] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f Dec 16 13:18:49.820753 containerd[1997]: 2025-12-16 13:18:49.752 [INFO][5040] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" host="ip-172-31-26-5" Dec 16 13:18:49.820753 containerd[1997]: 2025-12-16 13:18:49.762 [INFO][5040] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.4/26] block=192.168.82.0/26 
handle="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" host="ip-172-31-26-5" Dec 16 13:18:49.820753 containerd[1997]: 2025-12-16 13:18:49.763 [INFO][5040] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.4/26] handle="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" host="ip-172-31-26-5" Dec 16 13:18:49.820753 containerd[1997]: 2025-12-16 13:18:49.763 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:18:49.820753 containerd[1997]: 2025-12-16 13:18:49.763 [INFO][5040] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.4/26] IPv6=[] ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" HandleID="k8s-pod-network.0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Workload="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.822035 containerd[1997]: 2025-12-16 13:18:49.767 [INFO][5014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dadeccb-8701-4c2b-8c6d-546d297b7e36", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"coredns-668d6bf9bc-9b4vq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7683f8cbd7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:49.822035 containerd[1997]: 2025-12-16 13:18:49.767 [INFO][5014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.4/32] ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.822035 containerd[1997]: 2025-12-16 13:18:49.767 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7683f8cbd7b ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.822035 containerd[1997]: 2025-12-16 13:18:49.784 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.822035 containerd[1997]: 2025-12-16 13:18:49.785 [INFO][5014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dadeccb-8701-4c2b-8c6d-546d297b7e36", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f", Pod:"coredns-668d6bf9bc-9b4vq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7683f8cbd7b", MAC:"66:f6:e8:07:5c:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:49.822035 containerd[1997]: 2025-12-16 13:18:49.811 [INFO][5014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" Namespace="kube-system" Pod="coredns-668d6bf9bc-9b4vq" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--9b4vq-eth0" Dec 16 13:18:49.893448 containerd[1997]: time="2025-12-16T13:18:49.893397844Z" level=info msg="connecting to shim 0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f" address="unix:///run/containerd/s/5962b5dffcac0a16b8044e5489389a01bd6b7d7e8f6d95d4fc77a80a5225d605" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:49.947960 systemd-networkd[1860]: calie01b2573cb9: Link UP Dec 16 13:18:49.949316 systemd-networkd[1860]: calie01b2573cb9: Gained carrier Dec 16 13:18:49.975354 systemd[1]: Started cri-containerd-0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f.scope - libcontainer container 0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f. 
Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.665 [INFO][5015] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.682 [INFO][5015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0 goldmane-666569f655- calico-system fa364a9d-bf3b-4e47-9ccb-93bdce84b381 808 0 2025-12-16 13:18:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-5 goldmane-666569f655-kcbpt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie01b2573cb9 [] [] }} ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.682 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.739 [INFO][5048] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" HandleID="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Workload="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.739 [INFO][5048] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" 
HandleID="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Workload="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-5", "pod":"goldmane-666569f655-kcbpt", "timestamp":"2025-12-16 13:18:49.739161575 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.739 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.763 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.763 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.810 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.818 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.832 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.838 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.851 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:49.992627 
containerd[1997]: 2025-12-16 13:18:49.853 [INFO][5048] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.862 [INFO][5048] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33 Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.885 [INFO][5048] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.915 [INFO][5048] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.5/26] block=192.168.82.0/26 handle="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.918 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.5/26] handle="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" host="ip-172-31-26-5" Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.919 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:18:49.992627 containerd[1997]: 2025-12-16 13:18:49.919 [INFO][5048] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.5/26] IPv6=[] ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" HandleID="k8s-pod-network.1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Workload="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:49.994167 containerd[1997]: 2025-12-16 13:18:49.931 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fa364a9d-bf3b-4e47-9ccb-93bdce84b381", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"goldmane-666569f655-kcbpt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie01b2573cb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:49.994167 containerd[1997]: 2025-12-16 13:18:49.931 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.5/32] ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:49.994167 containerd[1997]: 2025-12-16 13:18:49.931 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie01b2573cb9 ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:49.994167 containerd[1997]: 2025-12-16 13:18:49.954 [INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:49.994167 containerd[1997]: 2025-12-16 13:18:49.956 [INFO][5015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fa364a9d-bf3b-4e47-9ccb-93bdce84b381", ResourceVersion:"808", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33", Pod:"goldmane-666569f655-kcbpt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie01b2573cb9", MAC:"1e:5f:3f:a5:de:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:49.994167 containerd[1997]: 2025-12-16 13:18:49.989 [INFO][5015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" Namespace="calico-system" Pod="goldmane-666569f655-kcbpt" WorkloadEndpoint="ip--172--31--26--5-k8s-goldmane--666569f655--kcbpt-eth0" Dec 16 13:18:50.044386 containerd[1997]: time="2025-12-16T13:18:50.044326542Z" level=info msg="connecting to shim 1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33" address="unix:///run/containerd/s/8247436d563be492a60ad67875ced99a703390c55fd2e69ba39e550df95a0e6a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:50.082357 systemd[1]: Started cri-containerd-1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33.scope - libcontainer container 
1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33. Dec 16 13:18:50.127016 containerd[1997]: time="2025-12-16T13:18:50.126973463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9b4vq,Uid:2dadeccb-8701-4c2b-8c6d-546d297b7e36,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f\"" Dec 16 13:18:50.134723 containerd[1997]: time="2025-12-16T13:18:50.134684307Z" level=info msg="CreateContainer within sandbox \"0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:18:50.164598 containerd[1997]: time="2025-12-16T13:18:50.164482290Z" level=info msg="Container bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:18:50.185234 containerd[1997]: time="2025-12-16T13:18:50.185183380Z" level=info msg="CreateContainer within sandbox \"0d09e18e0736ed00469a237548f1057cf69502d5e173135708240f64d956761f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b\"" Dec 16 13:18:50.187337 containerd[1997]: time="2025-12-16T13:18:50.187293192Z" level=info msg="StartContainer for \"bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b\"" Dec 16 13:18:50.190954 containerd[1997]: time="2025-12-16T13:18:50.190914288Z" level=info msg="connecting to shim bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b" address="unix:///run/containerd/s/5962b5dffcac0a16b8044e5489389a01bd6b7d7e8f6d95d4fc77a80a5225d605" protocol=ttrpc version=3 Dec 16 13:18:50.197341 containerd[1997]: time="2025-12-16T13:18:50.197297580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kcbpt,Uid:fa364a9d-bf3b-4e47-9ccb-93bdce84b381,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"1b8dd393aa4a77835381648b0f78c94347a6dbd2b81f79837d8032dccce8de33\"" Dec 16 13:18:50.201667 containerd[1997]: time="2025-12-16T13:18:50.201404878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:18:50.233281 systemd[1]: Started cri-containerd-bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b.scope - libcontainer container bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b. Dec 16 13:18:50.290070 containerd[1997]: time="2025-12-16T13:18:50.290010939Z" level=info msg="StartContainer for \"bc0ebc313620dfaee303f1ff7111f0fc79c3c946782c34e4562dacd573bbde6b\" returns successfully" Dec 16 13:18:50.499112 containerd[1997]: time="2025-12-16T13:18:50.498894566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:50.501463 containerd[1997]: time="2025-12-16T13:18:50.501338676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:18:50.501463 containerd[1997]: time="2025-12-16T13:18:50.501430328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:50.501651 kubelet[3309]: E1216 13:18:50.501594 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:18:50.501932 kubelet[3309]: E1216 13:18:50.501657 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:18:50.501932 kubelet[3309]: E1216 13:18:50.501801 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbvqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kcbpt_calico-system(fa364a9d-bf3b-4e47-9ccb-93bdce84b381): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:50.503224 kubelet[3309]: E1216 13:18:50.503143 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 
13:18:50.573066 containerd[1997]: time="2025-12-16T13:18:50.572983416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99pm5,Uid:0841f344-51c4-439d-ae21-35f7829c767b,Namespace:kube-system,Attempt:0,}" Dec 16 13:18:50.573772 containerd[1997]: time="2025-12-16T13:18:50.573742969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p2jzr,Uid:0d725eaf-63cb-4894-b5fe-56fa81c91e00,Namespace:calico-system,Attempt:0,}" Dec 16 13:18:50.574483 containerd[1997]: time="2025-12-16T13:18:50.573287761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-hn4df,Uid:f26b9dbd-da7d-4346-92a8-22a4fd24016d,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:18:50.895851 systemd[1]: Started sshd@8-172.31.26.5:22-139.178.68.195:34492.service - OpenSSH per-connection server daemon (139.178.68.195:34492). Dec 16 13:18:51.209334 sshd[5257]: Accepted publickey for core from 139.178.68.195 port 34492 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:18:51.220303 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:18:51.229693 systemd-networkd[1860]: calibebf2094468: Link UP Dec 16 13:18:51.229997 systemd-networkd[1860]: calibebf2094468: Gained carrier Dec 16 13:18:51.240636 systemd-logind[1972]: New session 8 of user core. Dec 16 13:18:51.248641 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 13:18:51.272553 kubelet[3309]: E1216 13:18:51.272280 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:50.732 [INFO][5197] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:50.815 [INFO][5197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0 coredns-668d6bf9bc- kube-system 0841f344-51c4-439d-ae21-35f7829c767b 799 0 2025-12-16 13:18:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-5 coredns-668d6bf9bc-99pm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibebf2094468 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:50.815 [INFO][5197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" 
WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.043 [INFO][5251] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" HandleID="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Workload="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.043 [INFO][5251] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" HandleID="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Workload="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-5", "pod":"coredns-668d6bf9bc-99pm5", "timestamp":"2025-12-16 13:18:51.043694583 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.043 [INFO][5251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.044 [INFO][5251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.044 [INFO][5251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.124 [INFO][5251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.136 [INFO][5251] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.148 [INFO][5251] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.152 [INFO][5251] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.159 [INFO][5251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.159 [INFO][5251] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.162 [INFO][5251] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585 Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.174 [INFO][5251] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.195 [INFO][5251] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.6/26] block=192.168.82.0/26 
handle="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.197 [INFO][5251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.6/26] handle="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" host="ip-172-31-26-5" Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.198 [INFO][5251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:18:51.299292 containerd[1997]: 2025-12-16 13:18:51.198 [INFO][5251] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.6/26] IPv6=[] ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" HandleID="k8s-pod-network.4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Workload="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.302720 containerd[1997]: 2025-12-16 13:18:51.218 [INFO][5197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0841f344-51c4-439d-ae21-35f7829c767b", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"coredns-668d6bf9bc-99pm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibebf2094468", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:51.302720 containerd[1997]: 2025-12-16 13:18:51.218 [INFO][5197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.6/32] ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.302720 containerd[1997]: 2025-12-16 13:18:51.218 [INFO][5197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibebf2094468 ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.302720 containerd[1997]: 2025-12-16 13:18:51.222 [INFO][5197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.302720 containerd[1997]: 2025-12-16 13:18:51.222 [INFO][5197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0841f344-51c4-439d-ae21-35f7829c767b", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585", Pod:"coredns-668d6bf9bc-99pm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibebf2094468", MAC:"4e:5d:dc:3b:52:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:51.302720 containerd[1997]: 2025-12-16 13:18:51.293 [INFO][5197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" Namespace="kube-system" Pod="coredns-668d6bf9bc-99pm5" WorkloadEndpoint="ip--172--31--26--5-k8s-coredns--668d6bf9bc--99pm5-eth0" Dec 16 13:18:51.416841 systemd-networkd[1860]: cali525a0fb64f1: Link UP Dec 16 13:18:51.426745 systemd-networkd[1860]: cali7683f8cbd7b: Gained IPv6LL Dec 16 13:18:51.432003 systemd-networkd[1860]: cali525a0fb64f1: Gained carrier Dec 16 13:18:51.494084 kubelet[3309]: I1216 13:18:51.493566 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9b4vq" podStartSLOduration=46.458917554 podStartE2EDuration="46.458917554s" podCreationTimestamp="2025-12-16 13:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:18:51.398560091 +0000 UTC m=+50.324174294" watchObservedRunningTime="2025-12-16 13:18:51.458917554 +0000 UTC m=+50.384531757" Dec 16 13:18:51.498540 containerd[1997]: time="2025-12-16T13:18:51.497885294Z" level=info msg="connecting to shim 4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585" address="unix:///run/containerd/s/49213d3ebdd6891b05990bb9aa51757b17efe21afdd5e86b3ca14b458471f68d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:50.758 [INFO][5200] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:51.576075 
containerd[1997]: 2025-12-16 13:18:50.856 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0 csi-node-driver- calico-system 0d725eaf-63cb-4894-b5fe-56fa81c91e00 694 0 2025-12-16 13:18:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-5 csi-node-driver-p2jzr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali525a0fb64f1 [] [] }} ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:50.856 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.080 [INFO][5261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" HandleID="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Workload="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.082 [INFO][5261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" HandleID="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" 
Workload="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000313db0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-5", "pod":"csi-node-driver-p2jzr", "timestamp":"2025-12-16 13:18:51.080414831 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.082 [INFO][5261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.203 [INFO][5261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.210 [INFO][5261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.292 [INFO][5261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.311 [INFO][5261] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.326 [INFO][5261] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.332 [INFO][5261] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.336 [INFO][5261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.336 [INFO][5261] ipam/ipam.go 1219: Attempting to assign 1 
addresses from block block=192.168.82.0/26 handle="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.341 [INFO][5261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93 Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.365 [INFO][5261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.383 [INFO][5261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.7/26] block=192.168.82.0/26 handle="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.384 [INFO][5261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.7/26] handle="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" host="ip-172-31-26-5" Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.384 [INFO][5261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:18:51.576075 containerd[1997]: 2025-12-16 13:18:51.384 [INFO][5261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.7/26] IPv6=[] ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" HandleID="k8s-pod-network.8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Workload="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.580432 containerd[1997]: 2025-12-16 13:18:51.397 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d725eaf-63cb-4894-b5fe-56fa81c91e00", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"csi-node-driver-p2jzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali525a0fb64f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:51.580432 containerd[1997]: 2025-12-16 13:18:51.397 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.7/32] ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.580432 containerd[1997]: 2025-12-16 13:18:51.397 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali525a0fb64f1 ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.580432 containerd[1997]: 2025-12-16 13:18:51.481 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.580432 containerd[1997]: 2025-12-16 13:18:51.484 [INFO][5200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d725eaf-63cb-4894-b5fe-56fa81c91e00", ResourceVersion:"694", 
Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93", Pod:"csi-node-driver-p2jzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali525a0fb64f1", MAC:"3a:5f:be:ab:48:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:51.580432 containerd[1997]: 2025-12-16 13:18:51.559 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" Namespace="calico-system" Pod="csi-node-driver-p2jzr" WorkloadEndpoint="ip--172--31--26--5-k8s-csi--node--driver--p2jzr-eth0" Dec 16 13:18:51.613002 systemd[1]: Started cri-containerd-4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585.scope - libcontainer container 4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585. 
Dec 16 13:18:51.681582 systemd-networkd[1860]: calie01b2573cb9: Gained IPv6LL Dec 16 13:18:51.690793 containerd[1997]: time="2025-12-16T13:18:51.690217342Z" level=info msg="connecting to shim 8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93" address="unix:///run/containerd/s/a6f3ca414612a4d5f47fc26a04cc17cfeb03d0941b9cb5c88470f59673a7ec67" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:51.799899 systemd-networkd[1860]: cali5cebc806d2a: Link UP Dec 16 13:18:51.804525 systemd-networkd[1860]: cali5cebc806d2a: Gained carrier Dec 16 13:18:51.813033 systemd[1]: Started cri-containerd-8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93.scope - libcontainer container 8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93. Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:50.768 [INFO][5206] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:50.847 [INFO][5206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0 calico-apiserver-7c88dfcf94- calico-apiserver f26b9dbd-da7d-4346-92a8-22a4fd24016d 807 0 2025-12-16 13:18:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c88dfcf94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-5 calico-apiserver-7c88dfcf94-hn4df eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5cebc806d2a [] [] }} ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 
13:18:50.847 [INFO][5206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.108 [INFO][5259] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" HandleID="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Workload="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.108 [INFO][5259] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" HandleID="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Workload="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-5", "pod":"calico-apiserver-7c88dfcf94-hn4df", "timestamp":"2025-12-16 13:18:51.108264455 +0000 UTC"}, Hostname:"ip-172-31-26-5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.108 [INFO][5259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.384 [INFO][5259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.385 [INFO][5259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-5' Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.482 [INFO][5259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.625 [INFO][5259] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.638 [INFO][5259] ipam/ipam.go 511: Trying affinity for 192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.660 [INFO][5259] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.670 [INFO][5259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.670 [INFO][5259] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.675 [INFO][5259] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.693 [INFO][5259] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.747 [INFO][5259] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.8/26] block=192.168.82.0/26 
handle="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.748 [INFO][5259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.8/26] handle="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" host="ip-172-31-26-5" Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.748 [INFO][5259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:18:51.854352 containerd[1997]: 2025-12-16 13:18:51.748 [INFO][5259] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.8/26] IPv6=[] ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" HandleID="k8s-pod-network.8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Workload="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.856746 containerd[1997]: 2025-12-16 13:18:51.772 [INFO][5206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0", GenerateName:"calico-apiserver-7c88dfcf94-", Namespace:"calico-apiserver", SelfLink:"", UID:"f26b9dbd-da7d-4346-92a8-22a4fd24016d", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c88dfcf94", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"", Pod:"calico-apiserver-7c88dfcf94-hn4df", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cebc806d2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:51.856746 containerd[1997]: 2025-12-16 13:18:51.772 [INFO][5206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.8/32] ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.856746 containerd[1997]: 2025-12-16 13:18:51.773 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cebc806d2a ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.856746 containerd[1997]: 2025-12-16 13:18:51.805 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.856746 containerd[1997]: 2025-12-16 13:18:51.805 [INFO][5206] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0", GenerateName:"calico-apiserver-7c88dfcf94-", Namespace:"calico-apiserver", SelfLink:"", UID:"f26b9dbd-da7d-4346-92a8-22a4fd24016d", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 18, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c88dfcf94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-5", ContainerID:"8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f", Pod:"calico-apiserver-7c88dfcf94-hn4df", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cebc806d2a", MAC:"4e:5c:cf:5d:27:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:18:51.856746 containerd[1997]: 2025-12-16 13:18:51.843 [INFO][5206] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" Namespace="calico-apiserver" Pod="calico-apiserver-7c88dfcf94-hn4df" WorkloadEndpoint="ip--172--31--26--5-k8s-calico--apiserver--7c88dfcf94--hn4df-eth0" Dec 16 13:18:51.926181 containerd[1997]: time="2025-12-16T13:18:51.924705008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99pm5,Uid:0841f344-51c4-439d-ae21-35f7829c767b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585\"" Dec 16 13:18:51.985689 containerd[1997]: time="2025-12-16T13:18:51.985143994Z" level=info msg="connecting to shim 8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f" address="unix:///run/containerd/s/99c14a637779e04c75b932e8a866352f7c7aff16a352840c5118ffeb0af4c9b9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:18:52.025084 containerd[1997]: time="2025-12-16T13:18:52.024580732Z" level=info msg="CreateContainer within sandbox \"4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:18:52.071130 containerd[1997]: time="2025-12-16T13:18:52.071002026Z" level=info msg="Container 99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:18:52.080334 systemd[1]: Started cri-containerd-8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f.scope - libcontainer container 8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f. 
Dec 16 13:18:52.125136 containerd[1997]: time="2025-12-16T13:18:52.124958641Z" level=info msg="CreateContainer within sandbox \"4b249eeb9931ffbc13b27816a26be22825518d9cf4c396f789ebac1083532585\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583\"" Dec 16 13:18:52.127197 containerd[1997]: time="2025-12-16T13:18:52.127097443Z" level=info msg="StartContainer for \"99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583\"" Dec 16 13:18:52.131092 containerd[1997]: time="2025-12-16T13:18:52.129405982Z" level=info msg="connecting to shim 99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583" address="unix:///run/containerd/s/49213d3ebdd6891b05990bb9aa51757b17efe21afdd5e86b3ca14b458471f68d" protocol=ttrpc version=3 Dec 16 13:18:52.186294 systemd[1]: Started cri-containerd-99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583.scope - libcontainer container 99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583. 
Dec 16 13:18:52.351944 containerd[1997]: time="2025-12-16T13:18:52.351826442Z" level=info msg="StartContainer for \"99b4719098f2e46fffc2f2139c48e67c38d6c494b072b120aab1dea99dace583\" returns successfully" Dec 16 13:18:52.494397 containerd[1997]: time="2025-12-16T13:18:52.494351904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c88dfcf94-hn4df,Uid:f26b9dbd-da7d-4346-92a8-22a4fd24016d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8dcae87ee08dbf4bba691ba94a22863f1a8ce2922fad40d3db4c326adf48f90f\"" Dec 16 13:18:52.496898 containerd[1997]: time="2025-12-16T13:18:52.496776931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:18:52.542370 sshd[5286]: Connection closed by 139.178.68.195 port 34492 Dec 16 13:18:52.543615 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Dec 16 13:18:52.556670 systemd[1]: sshd@8-172.31.26.5:22-139.178.68.195:34492.service: Deactivated successfully. Dec 16 13:18:52.563147 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:18:52.566400 systemd-logind[1972]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:18:52.569275 systemd-logind[1972]: Removed session 8. 
Dec 16 13:18:52.635273 containerd[1997]: time="2025-12-16T13:18:52.635109967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p2jzr,Uid:0d725eaf-63cb-4894-b5fe-56fa81c91e00,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a11e4704ddeae14cf6d5d6a7df697d4baaa7a46e366c2d76f84f3b66d361c93\"" Dec 16 13:18:52.807534 containerd[1997]: time="2025-12-16T13:18:52.807479986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:52.809632 containerd[1997]: time="2025-12-16T13:18:52.809572077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:18:52.809927 containerd[1997]: time="2025-12-16T13:18:52.809665207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:18:52.810331 kubelet[3309]: E1216 13:18:52.810096 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:52.810331 kubelet[3309]: E1216 13:18:52.810156 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:18:52.811271 kubelet[3309]: E1216 13:18:52.810383 3309 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55kdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-hn4df_calico-apiserver(f26b9dbd-da7d-4346-92a8-22a4fd24016d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:52.811384 containerd[1997]: time="2025-12-16T13:18:52.811207971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:18:52.811574 kubelet[3309]: E1216 13:18:52.811488 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:18:52.833558 systemd-networkd[1860]: calibebf2094468: 
Gained IPv6LL Dec 16 13:18:52.834387 systemd-networkd[1860]: cali525a0fb64f1: Gained IPv6LL Dec 16 13:18:53.089484 containerd[1997]: time="2025-12-16T13:18:53.089237257Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:53.089793 systemd-networkd[1860]: cali5cebc806d2a: Gained IPv6LL Dec 16 13:18:53.093771 containerd[1997]: time="2025-12-16T13:18:53.093637635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:18:53.093771 containerd[1997]: time="2025-12-16T13:18:53.093697628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:18:53.094389 kubelet[3309]: E1216 13:18:53.094110 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:18:53.094389 kubelet[3309]: E1216 13:18:53.094180 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:18:53.094389 kubelet[3309]: E1216 13:18:53.094333 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:53.099613 containerd[1997]: time="2025-12-16T13:18:53.099554073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:18:53.299939 kubelet[3309]: E1216 13:18:53.299844 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:18:53.306615 (udev-worker)[4483]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:18:53.323556 systemd-networkd[1860]: vxlan.calico: Link UP Dec 16 13:18:53.323655 systemd-networkd[1860]: vxlan.calico: Gained carrier Dec 16 13:18:53.327485 kubelet[3309]: I1216 13:18:53.327393 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-99pm5" podStartSLOduration=48.327367704 podStartE2EDuration="48.327367704s" podCreationTimestamp="2025-12-16 13:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:18:53.32638498 +0000 UTC m=+52.251999183" watchObservedRunningTime="2025-12-16 13:18:53.327367704 +0000 UTC m=+52.252982165" Dec 16 13:18:53.387777 containerd[1997]: time="2025-12-16T13:18:53.387581429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:53.389938 containerd[1997]: time="2025-12-16T13:18:53.389701635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:18:53.389938 containerd[1997]: time="2025-12-16T13:18:53.389756525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:18:53.390268 kubelet[3309]: E1216 13:18:53.390045 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:18:53.390354 kubelet[3309]: E1216 13:18:53.390285 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:18:53.390877 kubelet[3309]: E1216 13:18:53.390820 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:53.393189 kubelet[3309]: E1216 13:18:53.392256 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:54.305362 kubelet[3309]: E1216 13:18:54.305271 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:18:54.306195 kubelet[3309]: E1216 13:18:54.305689 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:18:55.073900 systemd-networkd[1860]: vxlan.calico: Gained IPv6LL Dec 16 13:18:57.578455 systemd[1]: Started sshd@9-172.31.26.5:22-139.178.68.195:34502.service - OpenSSH per-connection server daemon (139.178.68.195:34502). Dec 16 13:18:57.631520 ntpd[2184]: Listen normally on 6 vxlan.calico 192.168.82.0:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 6 vxlan.calico 192.168.82.0:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 7 cali68cfd16a5fc [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 8 calif21c9082992 [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 9 cali2e36abc1e74 [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 10 cali7683f8cbd7b [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 11 calie01b2573cb9 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 12 calibebf2094468 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 13:18:57.633906 
ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 13 cali525a0fb64f1 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 14 cali5cebc806d2a [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 13:18:57.633906 ntpd[2184]: 16 Dec 13:18:57 ntpd[2184]: Listen normally on 15 vxlan.calico [fe80::6467:e9ff:fe25:ca6f%12]:123 Dec 16 13:18:57.631570 ntpd[2184]: Listen normally on 7 cali68cfd16a5fc [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 13:18:57.631598 ntpd[2184]: Listen normally on 8 calif21c9082992 [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 13:18:57.631618 ntpd[2184]: Listen normally on 9 cali2e36abc1e74 [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 13:18:57.631637 ntpd[2184]: Listen normally on 10 cali7683f8cbd7b [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 13:18:57.631656 ntpd[2184]: Listen normally on 11 calie01b2573cb9 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 13:18:57.631675 ntpd[2184]: Listen normally on 12 calibebf2094468 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 13:18:57.631697 ntpd[2184]: Listen normally on 13 cali525a0fb64f1 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 13:18:57.631717 ntpd[2184]: Listen normally on 14 cali5cebc806d2a [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 13:18:57.631745 ntpd[2184]: Listen normally on 15 vxlan.calico [fe80::6467:e9ff:fe25:ca6f%12]:123 Dec 16 13:18:57.781500 sshd[5611]: Accepted publickey for core from 139.178.68.195 port 34502 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:18:57.784609 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:18:57.790835 systemd-logind[1972]: New session 9 of user core. Dec 16 13:18:57.795271 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:18:58.091135 sshd[5617]: Connection closed by 139.178.68.195 port 34502 Dec 16 13:18:58.091715 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Dec 16 13:18:58.098172 systemd[1]: sshd@9-172.31.26.5:22-139.178.68.195:34502.service: Deactivated successfully. Dec 16 13:18:58.101682 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:18:58.104558 systemd-logind[1972]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:18:58.106325 systemd-logind[1972]: Removed session 9. Dec 16 13:18:59.578565 containerd[1997]: time="2025-12-16T13:18:59.578498547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:18:59.843320 containerd[1997]: time="2025-12-16T13:18:59.843199838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:18:59.845395 containerd[1997]: time="2025-12-16T13:18:59.845346912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:18:59.845532 containerd[1997]: time="2025-12-16T13:18:59.845357028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:18:59.845627 kubelet[3309]: E1216 13:18:59.845582 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:59.846000 kubelet[3309]: E1216 13:18:59.845636 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:18:59.846000 kubelet[3309]: E1216 13:18:59.845734 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b88f196c3d5438da0becadb94577a42,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:18:59.849029 containerd[1997]: time="2025-12-16T13:18:59.849003423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:19:00.140605 containerd[1997]: time="2025-12-16T13:19:00.140557218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:00.143078 containerd[1997]: time="2025-12-16T13:19:00.142990877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:19:00.143321 containerd[1997]: time="2025-12-16T13:19:00.142991681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:19:00.143401 kubelet[3309]: E1216 13:19:00.143314 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:19:00.143401 kubelet[3309]: E1216 13:19:00.143375 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 
13:19:00.143570 kubelet[3309]: E1216 13:19:00.143514 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:00.145109 kubelet[3309]: E1216 13:19:00.145039 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69" Dec 16 13:19:00.572844 containerd[1997]: time="2025-12-16T13:19:00.572527195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:19:00.872934 containerd[1997]: time="2025-12-16T13:19:00.872888254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:00.875141 containerd[1997]: time="2025-12-16T13:19:00.875082331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:19:00.875281 containerd[1997]: 
time="2025-12-16T13:19:00.875100390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:00.875386 kubelet[3309]: E1216 13:19:00.875343 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:00.875709 kubelet[3309]: E1216 13:19:00.875393 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:00.875709 kubelet[3309]: E1216 13:19:00.875629 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx4qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-jsc58_calico-apiserver(51551839-7827-49fc-85bc-04ba08e6c0fe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:00.876877 kubelet[3309]: E1216 13:19:00.876811 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:19:01.647545 containerd[1997]: time="2025-12-16T13:19:01.647311686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:19:02.022009 containerd[1997]: time="2025-12-16T13:19:02.021151553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:02.047994 containerd[1997]: time="2025-12-16T13:19:02.044161832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:19:02.047994 containerd[1997]: time="2025-12-16T13:19:02.044263412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:19:02.069779 kubelet[3309]: E1216 13:19:02.065186 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:19:02.069779 kubelet[3309]: E1216 13:19:02.065258 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:19:02.069779 kubelet[3309]: E1216 13:19:02.065412 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6c9b69bb-zds8r_calico-system(5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:02.069779 kubelet[3309]: E1216 13:19:02.068747 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:19:02.574697 containerd[1997]: time="2025-12-16T13:19:02.574639413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:19:02.895225 containerd[1997]: time="2025-12-16T13:19:02.894989535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:02.901557 containerd[1997]: time="2025-12-16T13:19:02.901494133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:19:02.901873 containerd[1997]: time="2025-12-16T13:19:02.901761570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:02.903123 kubelet[3309]: E1216 13:19:02.902310 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:19:02.903123 kubelet[3309]: E1216 13:19:02.902370 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:19:02.903123 kubelet[3309]: E1216 13:19:02.902558 3309 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbvqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kcbpt_calico-system(fa364a9d-bf3b-4e47-9ccb-93bdce84b381): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:02.917189 kubelet[3309]: E1216 13:19:02.917134 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 13:19:03.137218 systemd[1]: Started sshd@10-172.31.26.5:22-139.178.68.195:60062.service - OpenSSH per-connection server daemon (139.178.68.195:60062). 
Dec 16 13:19:03.514351 sshd[5640]: Accepted publickey for core from 139.178.68.195 port 60062 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:03.517977 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:03.524753 systemd-logind[1972]: New session 10 of user core. Dec 16 13:19:03.534787 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:19:04.202062 sshd[5643]: Connection closed by 139.178.68.195 port 60062 Dec 16 13:19:04.207551 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:04.256769 systemd[1]: sshd@10-172.31.26.5:22-139.178.68.195:60062.service: Deactivated successfully. Dec 16 13:19:04.260833 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:19:04.262383 systemd-logind[1972]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:19:04.266754 systemd[1]: Started sshd@11-172.31.26.5:22-139.178.68.195:60066.service - OpenSSH per-connection server daemon (139.178.68.195:60066). Dec 16 13:19:04.269385 systemd-logind[1972]: Removed session 10. Dec 16 13:19:04.495572 sshd[5664]: Accepted publickey for core from 139.178.68.195 port 60066 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:04.497912 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:04.519016 systemd-logind[1972]: New session 11 of user core. Dec 16 13:19:04.545698 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:19:04.913953 sshd[5667]: Connection closed by 139.178.68.195 port 60066 Dec 16 13:19:04.919368 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:04.948986 systemd[1]: sshd@11-172.31.26.5:22-139.178.68.195:60066.service: Deactivated successfully. Dec 16 13:19:04.954489 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 16 13:19:04.958600 systemd-logind[1972]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:19:04.962527 systemd[1]: Started sshd@12-172.31.26.5:22-139.178.68.195:60072.service - OpenSSH per-connection server daemon (139.178.68.195:60072). Dec 16 13:19:04.966970 systemd-logind[1972]: Removed session 11. Dec 16 13:19:05.169849 sshd[5677]: Accepted publickey for core from 139.178.68.195 port 60072 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:05.173350 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:05.179963 systemd-logind[1972]: New session 12 of user core. Dec 16 13:19:05.184837 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:19:05.429730 sshd[5680]: Connection closed by 139.178.68.195 port 60072 Dec 16 13:19:05.430378 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:05.435668 systemd[1]: sshd@12-172.31.26.5:22-139.178.68.195:60072.service: Deactivated successfully. Dec 16 13:19:05.438828 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:19:05.439906 systemd-logind[1972]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:19:05.441887 systemd-logind[1972]: Removed session 12. 
Dec 16 13:19:06.573940 containerd[1997]: time="2025-12-16T13:19:06.573652253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:19:06.822333 containerd[1997]: time="2025-12-16T13:19:06.822267563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:06.824461 containerd[1997]: time="2025-12-16T13:19:06.824349232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:19:06.825088 containerd[1997]: time="2025-12-16T13:19:06.824741087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:06.825219 kubelet[3309]: E1216 13:19:06.825178 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:06.825219 kubelet[3309]: E1216 13:19:06.825229 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:06.825755 kubelet[3309]: E1216 13:19:06.825337 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55kdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7c88dfcf94-hn4df_calico-apiserver(f26b9dbd-da7d-4346-92a8-22a4fd24016d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:06.826911 kubelet[3309]: E1216 13:19:06.826836 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:19:09.574093 containerd[1997]: time="2025-12-16T13:19:09.573854708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:19:09.879533 containerd[1997]: time="2025-12-16T13:19:09.879478308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:09.881685 containerd[1997]: time="2025-12-16T13:19:09.881601614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:19:09.881685 containerd[1997]: time="2025-12-16T13:19:09.881689725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:19:09.881865 kubelet[3309]: E1216 13:19:09.881830 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:19:09.882180 kubelet[3309]: E1216 13:19:09.881873 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:19:09.882180 kubelet[3309]: E1216 13:19:09.881988 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[A
LL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:09.885671 containerd[1997]: time="2025-12-16T13:19:09.885610297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:19:10.184023 containerd[1997]: time="2025-12-16T13:19:10.183885271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:10.186125 containerd[1997]: time="2025-12-16T13:19:10.185979342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:19:10.186125 containerd[1997]: time="2025-12-16T13:19:10.186028825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:19:10.186423 kubelet[3309]: E1216 13:19:10.186378 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:19:10.186533 kubelet[3309]: E1216 13:19:10.186428 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:19:10.186582 kubelet[3309]: E1216 13:19:10.186534 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:10.187789 kubelet[3309]: E1216 13:19:10.187740 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 
13:19:10.468607 systemd[1]: Started sshd@13-172.31.26.5:22-139.178.68.195:42866.service - OpenSSH per-connection server daemon (139.178.68.195:42866). Dec 16 13:19:10.647379 sshd[5697]: Accepted publickey for core from 139.178.68.195 port 42866 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:10.649016 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:10.653905 systemd-logind[1972]: New session 13 of user core. Dec 16 13:19:10.660399 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:19:10.855753 sshd[5700]: Connection closed by 139.178.68.195 port 42866 Dec 16 13:19:10.856564 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:10.861256 systemd-logind[1972]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:19:10.861368 systemd[1]: sshd@13-172.31.26.5:22-139.178.68.195:42866.service: Deactivated successfully. Dec 16 13:19:10.863949 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:19:10.866116 systemd-logind[1972]: Removed session 13. 
Dec 16 13:19:13.580104 kubelet[3309]: E1216 13:19:13.579794 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:19:15.574988 kubelet[3309]: E1216 13:19:15.574180 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:19:15.575778 kubelet[3309]: E1216 13:19:15.575731 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69" Dec 16 13:19:15.891577 systemd[1]: Started sshd@14-172.31.26.5:22-139.178.68.195:42882.service - OpenSSH per-connection server daemon (139.178.68.195:42882). Dec 16 13:19:16.123605 sshd[5744]: Accepted publickey for core from 139.178.68.195 port 42882 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:16.131543 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:16.139278 systemd-logind[1972]: New session 14 of user core. Dec 16 13:19:16.145319 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:19:16.440212 sshd[5747]: Connection closed by 139.178.68.195 port 42882 Dec 16 13:19:16.441282 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:16.445960 systemd[1]: sshd@14-172.31.26.5:22-139.178.68.195:42882.service: Deactivated successfully. Dec 16 13:19:16.448860 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:19:16.450891 systemd-logind[1972]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:19:16.452938 systemd-logind[1972]: Removed session 14. 
Dec 16 13:19:16.573705 kubelet[3309]: E1216 13:19:16.573566 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 13:19:19.574076 kubelet[3309]: E1216 13:19:19.573479 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:19:20.575607 kubelet[3309]: E1216 13:19:20.575545 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:19:21.475509 systemd[1]: Started sshd@15-172.31.26.5:22-139.178.68.195:38854.service - OpenSSH per-connection server daemon (139.178.68.195:38854). Dec 16 13:19:21.656128 sshd[5761]: Accepted publickey for core from 139.178.68.195 port 38854 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:21.657531 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:21.662977 systemd-logind[1972]: New session 15 of user core. Dec 16 13:19:21.669289 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:19:21.888437 sshd[5764]: Connection closed by 139.178.68.195 port 38854 Dec 16 13:19:21.889253 sshd-session[5761]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:21.894522 systemd[1]: sshd@15-172.31.26.5:22-139.178.68.195:38854.service: Deactivated successfully. Dec 16 13:19:21.897222 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:19:21.898807 systemd-logind[1972]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:19:21.900994 systemd-logind[1972]: Removed session 15. Dec 16 13:19:26.923262 systemd[1]: Started sshd@16-172.31.26.5:22-139.178.68.195:38862.service - OpenSSH per-connection server daemon (139.178.68.195:38862). Dec 16 13:19:27.150399 sshd[5777]: Accepted publickey for core from 139.178.68.195 port 38862 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:27.152667 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:27.161430 systemd-logind[1972]: New session 16 of user core. 
Dec 16 13:19:27.166309 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:19:27.499279 sshd[5780]: Connection closed by 139.178.68.195 port 38862 Dec 16 13:19:27.501885 sshd-session[5777]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:27.506815 systemd-logind[1972]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:19:27.507512 systemd[1]: sshd@16-172.31.26.5:22-139.178.68.195:38862.service: Deactivated successfully. Dec 16 13:19:27.509610 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:19:27.512285 systemd-logind[1972]: Removed session 16. Dec 16 13:19:27.538790 systemd[1]: Started sshd@17-172.31.26.5:22-139.178.68.195:38872.service - OpenSSH per-connection server daemon (139.178.68.195:38872). Dec 16 13:19:27.577600 containerd[1997]: time="2025-12-16T13:19:27.577353662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:19:27.724765 sshd[5792]: Accepted publickey for core from 139.178.68.195 port 38872 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:27.727617 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:27.734006 systemd-logind[1972]: New session 17 of user core. Dec 16 13:19:27.747427 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:19:27.841967 containerd[1997]: time="2025-12-16T13:19:27.841840959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:27.844082 containerd[1997]: time="2025-12-16T13:19:27.843893638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:19:27.844082 containerd[1997]: time="2025-12-16T13:19:27.844006517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:19:27.844941 containerd[1997]: time="2025-12-16T13:19:27.844820328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:19:27.845006 kubelet[3309]: E1216 13:19:27.844231 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:19:27.845006 kubelet[3309]: E1216 13:19:27.844299 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:19:27.845006 kubelet[3309]: E1216 13:19:27.844565 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b88f196c3d5438da0becadb94577a42,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:28.140646 containerd[1997]: time="2025-12-16T13:19:28.140589929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:28.142714 
containerd[1997]: time="2025-12-16T13:19:28.142654640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:19:28.142829 containerd[1997]: time="2025-12-16T13:19:28.142741146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:28.142926 kubelet[3309]: E1216 13:19:28.142888 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:28.143233 kubelet[3309]: E1216 13:19:28.142934 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:28.143233 kubelet[3309]: E1216 13:19:28.143170 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx4qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-jsc58_calico-apiserver(51551839-7827-49fc-85bc-04ba08e6c0fe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:28.143394 containerd[1997]: time="2025-12-16T13:19:28.143321751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:19:28.144742 kubelet[3309]: E1216 13:19:28.144695 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:19:28.413153 sshd[5795]: Connection closed by 139.178.68.195 port 38872 Dec 16 13:19:28.415759 sshd-session[5792]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:28.426389 systemd[1]: sshd@17-172.31.26.5:22-139.178.68.195:38872.service: Deactivated successfully. Dec 16 13:19:28.429112 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:19:28.430656 systemd-logind[1972]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:19:28.432933 systemd-logind[1972]: Removed session 17. 
Dec 16 13:19:28.437257 containerd[1997]: time="2025-12-16T13:19:28.437087580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:28.439501 containerd[1997]: time="2025-12-16T13:19:28.439326102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:19:28.439501 containerd[1997]: time="2025-12-16T13:19:28.439412563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:19:28.439817 kubelet[3309]: E1216 13:19:28.439624 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:19:28.440105 kubelet[3309]: E1216 13:19:28.439665 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:19:28.440249 kubelet[3309]: E1216 13:19:28.440199 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:28.441890 kubelet[3309]: E1216 13:19:28.441705 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69" Dec 16 13:19:28.449887 systemd[1]: Started sshd@18-172.31.26.5:22-139.178.68.195:38878.service - OpenSSH per-connection server daemon (139.178.68.195:38878). Dec 16 13:19:28.578840 containerd[1997]: time="2025-12-16T13:19:28.578771169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:19:28.689337 sshd[5805]: Accepted publickey for core from 139.178.68.195 port 38878 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:28.692340 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:28.698033 systemd-logind[1972]: New session 18 of user core. Dec 16 13:19:28.706328 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 13:19:28.857708 containerd[1997]: time="2025-12-16T13:19:28.857643406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:28.860008 containerd[1997]: time="2025-12-16T13:19:28.859880008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:19:28.860179 containerd[1997]: time="2025-12-16T13:19:28.860077329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:19:28.860306 kubelet[3309]: E1216 13:19:28.860265 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:19:28.860882 kubelet[3309]: E1216 13:19:28.860326 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:19:28.860882 kubelet[3309]: E1216 13:19:28.860492 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6c9b69bb-zds8r_calico-system(5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:28.862283 kubelet[3309]: E1216 13:19:28.862232 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:19:29.582228 containerd[1997]: time="2025-12-16T13:19:29.582078807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:19:29.754312 sshd[5808]: 
Connection closed by 139.178.68.195 port 38878 Dec 16 13:19:29.756459 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:29.767332 systemd-logind[1972]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:19:29.768580 systemd[1]: sshd@18-172.31.26.5:22-139.178.68.195:38878.service: Deactivated successfully. Dec 16 13:19:29.772338 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:19:29.776915 systemd-logind[1972]: Removed session 18. Dec 16 13:19:29.795373 systemd[1]: Started sshd@19-172.31.26.5:22-139.178.68.195:38894.service - OpenSSH per-connection server daemon (139.178.68.195:38894). Dec 16 13:19:29.866840 containerd[1997]: time="2025-12-16T13:19:29.866796572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:29.869314 containerd[1997]: time="2025-12-16T13:19:29.869246657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:19:29.869422 containerd[1997]: time="2025-12-16T13:19:29.869338626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:29.869554 kubelet[3309]: E1216 13:19:29.869509 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:19:29.869874 kubelet[3309]: E1216 13:19:29.869562 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:19:29.869874 kubelet[3309]: E1216 13:19:29.869698 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbvqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kcbpt_calico-system(fa364a9d-bf3b-4e47-9ccb-93bdce84b381): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:29.870947 kubelet[3309]: E1216 13:19:29.870897 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 
13:19:29.984770 sshd[5829]: Accepted publickey for core from 139.178.68.195 port 38894 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:29.988233 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:29.994185 systemd-logind[1972]: New session 19 of user core. Dec 16 13:19:30.000349 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 13:19:30.574937 containerd[1997]: time="2025-12-16T13:19:30.574890058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:19:30.672358 sshd[5832]: Connection closed by 139.178.68.195 port 38894 Dec 16 13:19:30.673271 sshd-session[5829]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:30.681498 systemd[1]: sshd@19-172.31.26.5:22-139.178.68.195:38894.service: Deactivated successfully. Dec 16 13:19:30.687997 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:19:30.692398 systemd-logind[1972]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:19:30.709929 systemd-logind[1972]: Removed session 19. Dec 16 13:19:30.710783 systemd[1]: Started sshd@20-172.31.26.5:22-139.178.68.195:40286.service - OpenSSH per-connection server daemon (139.178.68.195:40286). 
Dec 16 13:19:30.858129 containerd[1997]: time="2025-12-16T13:19:30.858076561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:30.860678 containerd[1997]: time="2025-12-16T13:19:30.860605985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:19:30.860806 containerd[1997]: time="2025-12-16T13:19:30.860695569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:19:30.860893 kubelet[3309]: E1216 13:19:30.860839 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:30.860893 kubelet[3309]: E1216 13:19:30.860889 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:19:30.861314 kubelet[3309]: E1216 13:19:30.861040 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55kdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-hn4df_calico-apiserver(f26b9dbd-da7d-4346-92a8-22a4fd24016d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:30.862320 kubelet[3309]: E1216 13:19:30.862269 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:19:30.906028 sshd[5841]: Accepted publickey for core from 139.178.68.195 port 40286 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:30.906674 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:30.912935 systemd-logind[1972]: New session 20 of user core. Dec 16 13:19:30.919610 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:19:31.137833 sshd[5844]: Connection closed by 139.178.68.195 port 40286 Dec 16 13:19:31.138818 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:31.143288 systemd[1]: sshd@20-172.31.26.5:22-139.178.68.195:40286.service: Deactivated successfully. Dec 16 13:19:31.145941 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:19:31.146788 systemd-logind[1972]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:19:31.148731 systemd-logind[1972]: Removed session 20. 
Dec 16 13:19:32.573840 containerd[1997]: time="2025-12-16T13:19:32.573589893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:19:32.848449 containerd[1997]: time="2025-12-16T13:19:32.848119645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:32.850435 containerd[1997]: time="2025-12-16T13:19:32.850293001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:19:32.854313 containerd[1997]: time="2025-12-16T13:19:32.850781471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:19:32.861111 kubelet[3309]: E1216 13:19:32.859555 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:19:32.861111 kubelet[3309]: E1216 13:19:32.859633 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:19:32.861111 kubelet[3309]: E1216 13:19:32.859774 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:32.877303 containerd[1997]: time="2025-12-16T13:19:32.876817016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:19:33.167419 containerd[1997]: time="2025-12-16T13:19:33.167279676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:19:33.169494 containerd[1997]: time="2025-12-16T13:19:33.169449235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:19:33.169609 containerd[1997]: time="2025-12-16T13:19:33.169535634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:19:33.169818 kubelet[3309]: E1216 13:19:33.169742 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:19:33.169818 kubelet[3309]: E1216 13:19:33.169789 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:19:33.169969 kubelet[3309]: E1216 
13:19:33.169897 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:19:33.171336 kubelet[3309]: E1216 13:19:33.171270 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:19:36.173192 systemd[1]: Started sshd@21-172.31.26.5:22-139.178.68.195:40296.service - OpenSSH per-connection server daemon (139.178.68.195:40296). Dec 16 13:19:36.417789 sshd[5867]: Accepted publickey for core from 139.178.68.195 port 40296 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:36.421481 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:36.428583 systemd-logind[1972]: New session 21 of user core. Dec 16 13:19:36.433416 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 13:19:37.009358 sshd[5872]: Connection closed by 139.178.68.195 port 40296 Dec 16 13:19:37.010404 sshd-session[5867]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:37.015235 systemd[1]: sshd@21-172.31.26.5:22-139.178.68.195:40296.service: Deactivated successfully. Dec 16 13:19:37.018070 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:19:37.020356 systemd-logind[1972]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:19:37.022034 systemd-logind[1972]: Removed session 21. Dec 16 13:19:39.915687 update_engine[1975]: I20251216 13:19:39.915601 1975 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 16 13:19:39.915687 update_engine[1975]: I20251216 13:19:39.915666 1975 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 16 13:19:39.918151 update_engine[1975]: I20251216 13:19:39.918107 1975 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 16 13:19:39.920964 update_engine[1975]: I20251216 13:19:39.920601 1975 omaha_request_params.cc:62] Current group set to stable Dec 16 13:19:39.920964 update_engine[1975]: I20251216 13:19:39.920769 1975 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 16 13:19:39.920964 update_engine[1975]: I20251216 13:19:39.920785 1975 update_attempter.cc:643] Scheduling an action processor start. 
Dec 16 13:19:39.920964 update_engine[1975]: I20251216 13:19:39.920809 1975 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 16 13:19:39.920964 update_engine[1975]: I20251216 13:19:39.920864 1975 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 16 13:19:39.921254 update_engine[1975]: I20251216 13:19:39.921139 1975 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 16 13:19:39.921254 update_engine[1975]: I20251216 13:19:39.921157 1975 omaha_request_action.cc:272] Request: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: Dec 16 13:19:39.921254 update_engine[1975]: I20251216 13:19:39.921167 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:19:39.937088 update_engine[1975]: I20251216 13:19:39.936859 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:19:39.939382 update_engine[1975]: I20251216 13:19:39.939320 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 13:19:39.965428 locksmithd[2036]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 16 13:19:39.967789 update_engine[1975]: E20251216 13:19:39.967721 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 16 13:19:39.968166 update_engine[1975]: I20251216 13:19:39.967849 1975 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 16 13:19:40.575359 kubelet[3309]: E1216 13:19:40.574795 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:19:40.578064 kubelet[3309]: E1216 13:19:40.577995 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69" Dec 16 13:19:42.051443 systemd[1]: Started sshd@22-172.31.26.5:22-139.178.68.195:56036.service - OpenSSH per-connection server daemon (139.178.68.195:56036). Dec 16 13:19:42.318897 sshd[5883]: Accepted publickey for core from 139.178.68.195 port 56036 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:42.322257 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:42.331867 systemd-logind[1972]: New session 22 of user core. Dec 16 13:19:42.340637 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:19:42.575730 kubelet[3309]: E1216 13:19:42.575612 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:19:42.804079 sshd[5886]: Connection closed by 139.178.68.195 port 56036 Dec 16 13:19:42.804312 sshd-session[5883]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:42.809304 systemd[1]: sshd@22-172.31.26.5:22-139.178.68.195:56036.service: Deactivated successfully. Dec 16 13:19:42.813535 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:19:42.814945 systemd-logind[1972]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:19:42.817471 systemd-logind[1972]: Removed session 22. 
Dec 16 13:19:44.576192 kubelet[3309]: E1216 13:19:44.575179 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 13:19:44.589563 kubelet[3309]: E1216 13:19:44.589521 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:19:46.575367 kubelet[3309]: E1216 13:19:46.575306 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:19:47.836297 systemd[1]: Started sshd@23-172.31.26.5:22-139.178.68.195:56040.service - OpenSSH per-connection server daemon (139.178.68.195:56040). Dec 16 13:19:48.049088 sshd[5923]: Accepted publickey for core from 139.178.68.195 port 56040 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:19:48.056945 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:19:48.064601 systemd-logind[1972]: New session 23 of user core. Dec 16 13:19:48.073320 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:19:48.615199 sshd[5926]: Connection closed by 139.178.68.195 port 56040 Dec 16 13:19:48.617651 sshd-session[5923]: pam_unix(sshd:session): session closed for user core Dec 16 13:19:48.626088 systemd[1]: sshd@23-172.31.26.5:22-139.178.68.195:56040.service: Deactivated successfully. Dec 16 13:19:48.631575 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:19:48.638193 systemd-logind[1972]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:19:48.640287 systemd-logind[1972]: Removed session 23. Dec 16 13:19:49.853127 update_engine[1975]: I20251216 13:19:49.852118 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 16 13:19:49.853623 update_engine[1975]: I20251216 13:19:49.853166 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 16 13:19:49.853623 update_engine[1975]: I20251216 13:19:49.853590 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 16 13:19:49.854872 update_engine[1975]: E20251216 13:19:49.854617 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 13:19:49.854872 update_engine[1975]: I20251216 13:19:49.854717 1975 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 16 13:19:51.577778 kubelet[3309]: E1216 13:19:51.577709 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69"
Dec 16 13:19:53.657015 systemd[1]: Started sshd@24-172.31.26.5:22-139.178.68.195:53230.service - OpenSSH per-connection server daemon (139.178.68.195:53230).
Dec 16 13:19:53.921664 sshd[5946]: Accepted publickey for core from 139.178.68.195 port 53230 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:19:53.925904 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:19:53.934353 systemd-logind[1972]: New session 24 of user core.
Dec 16 13:19:53.938288 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 13:19:54.439493 sshd[5949]: Connection closed by 139.178.68.195 port 53230
Dec 16 13:19:54.440477 sshd-session[5946]: pam_unix(sshd:session): session closed for user core
Dec 16 13:19:54.446024 systemd-logind[1972]: Session 24 logged out. Waiting for processes to exit.
Dec 16 13:19:54.446469 systemd[1]: sshd@24-172.31.26.5:22-139.178.68.195:53230.service: Deactivated successfully.
Dec 16 13:19:54.450200 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 13:19:54.452813 systemd-logind[1972]: Removed session 24.
Dec 16 13:19:54.573242 kubelet[3309]: E1216 13:19:54.573072 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe"
Dec 16 13:19:55.574961 kubelet[3309]: E1216 13:19:55.574503 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381"
Dec 16 13:19:56.573319 kubelet[3309]: E1216 13:19:56.573267 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d"
Dec 16 13:19:57.578265 kubelet[3309]: E1216 13:19:57.577747 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4"
Dec 16 13:19:59.479409 systemd[1]: Started sshd@25-172.31.26.5:22-139.178.68.195:53240.service - OpenSSH per-connection server daemon (139.178.68.195:53240).
Dec 16 13:19:59.666948 sshd[5962]: Accepted publickey for core from 139.178.68.195 port 53240 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:19:59.668482 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:19:59.674111 systemd-logind[1972]: New session 25 of user core.
Dec 16 13:19:59.681486 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 13:19:59.851715 update_engine[1975]: I20251216 13:19:59.851099 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 13:19:59.851715 update_engine[1975]: I20251216 13:19:59.851200 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 13:19:59.851715 update_engine[1975]: I20251216 13:19:59.851642 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 13:19:59.853563 update_engine[1975]: E20251216 13:19:59.853412 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 13:19:59.853563 update_engine[1975]: I20251216 13:19:59.853528 1975 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 16 13:19:59.998268 sshd[5965]: Connection closed by 139.178.68.195 port 53240
Dec 16 13:20:00.000346 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Dec 16 13:20:00.009895 systemd[1]: sshd@25-172.31.26.5:22-139.178.68.195:53240.service: Deactivated successfully.
Dec 16 13:20:00.014343 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 13:20:00.019534 systemd-logind[1972]: Session 25 logged out. Waiting for processes to exit.
Dec 16 13:20:00.023579 systemd-logind[1972]: Removed session 25.
Dec 16 13:20:00.576083 kubelet[3309]: E1216 13:20:00.576009 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00"
Dec 16 13:20:03.586983 kubelet[3309]: E1216 13:20:03.586921 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69"
Dec 16 13:20:05.066388 systemd[1]: Started sshd@26-172.31.26.5:22-139.178.68.195:43064.service - OpenSSH per-connection server daemon (139.178.68.195:43064).
Dec 16 13:20:05.312514 sshd[5979]: Accepted publickey for core from 139.178.68.195 port 43064 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:20:05.314411 sshd-session[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:20:05.328201 systemd-logind[1972]: New session 26 of user core.
Dec 16 13:20:05.334324 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 13:20:05.782092 sshd[5982]: Connection closed by 139.178.68.195 port 43064
Dec 16 13:20:05.783490 sshd-session[5979]: pam_unix(sshd:session): session closed for user core
Dec 16 13:20:05.793930 systemd[1]: sshd@26-172.31.26.5:22-139.178.68.195:43064.service: Deactivated successfully.
Dec 16 13:20:05.802843 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 13:20:05.806122 systemd-logind[1972]: Session 26 logged out. Waiting for processes to exit.
Dec 16 13:20:05.810157 systemd-logind[1972]: Removed session 26.
Dec 16 13:20:07.575885 kubelet[3309]: E1216 13:20:07.575241 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d"
Dec 16 13:20:07.575885 kubelet[3309]: E1216 13:20:07.575726 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe"
Dec 16 13:20:09.577386 kubelet[3309]: E1216 13:20:09.577339 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381"
Dec 16 13:20:09.855193 update_engine[1975]: I20251216 13:20:09.855113 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 13:20:09.855193 update_engine[1975]: I20251216 13:20:09.855199 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 13:20:09.856102 update_engine[1975]: I20251216 13:20:09.855726 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 13:20:09.858517 update_engine[1975]: E20251216 13:20:09.858343 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 13:20:09.858897 update_engine[1975]: I20251216 13:20:09.858613 1975 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 16 13:20:09.858897 update_engine[1975]: I20251216 13:20:09.858659 1975 omaha_request_action.cc:617] Omaha request response:
Dec 16 13:20:09.859547 update_engine[1975]: E20251216 13:20:09.859499 1975 omaha_request_action.cc:636] Omaha request network transfer failed.
Dec 16 13:20:09.859850 update_engine[1975]: I20251216 13:20:09.859552 1975 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 16 13:20:09.859850 update_engine[1975]: I20251216 13:20:09.859565 1975 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 13:20:09.859850 update_engine[1975]: I20251216 13:20:09.859582 1975 update_attempter.cc:306] Processing Done.
Dec 16 13:20:09.859850 update_engine[1975]: E20251216 13:20:09.859604 1975 update_attempter.cc:619] Update failed.
Dec 16 13:20:09.861555 update_engine[1975]: I20251216 13:20:09.861392 1975 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 16 13:20:09.861555 update_engine[1975]: I20251216 13:20:09.861428 1975 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 16 13:20:09.861555 update_engine[1975]: I20251216 13:20:09.861438 1975 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 16 13:20:09.862490 update_engine[1975]: I20251216 13:20:09.861865 1975 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 16 13:20:09.864098 update_engine[1975]: I20251216 13:20:09.861912 1975 omaha_request_action.cc:271] Posting an Omaha request to disabled
Dec 16 13:20:09.864098 update_engine[1975]: I20251216 13:20:09.863131 1975 omaha_request_action.cc:272] Request:
Dec 16 13:20:09.864098 update_engine[1975]:
Dec 16 13:20:09.864098 update_engine[1975]:
Dec 16 13:20:09.864098 update_engine[1975]:
Dec 16 13:20:09.864098 update_engine[1975]:
Dec 16 13:20:09.864098 update_engine[1975]:
Dec 16 13:20:09.864098 update_engine[1975]:
Dec 16 13:20:09.864098 update_engine[1975]: I20251216 13:20:09.863154 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 16 13:20:09.864098 update_engine[1975]: I20251216 13:20:09.863279 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 16 13:20:09.865178 update_engine[1975]: I20251216 13:20:09.864583 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 16 13:20:09.865971 update_engine[1975]: E20251216 13:20:09.865532 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865646 1975 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865669 1975 omaha_request_action.cc:617] Omaha request response:
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865679 1975 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865685 1975 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865692 1975 update_attempter.cc:306] Processing Done.
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865701 1975 update_attempter.cc:310] Error event sent.
Dec 16 13:20:09.865971 update_engine[1975]: I20251216 13:20:09.865712 1975 update_check_scheduler.cc:74] Next update check in 46m30s
Dec 16 13:20:09.868138 locksmithd[2036]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 16 13:20:09.868138 locksmithd[2036]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 16 13:20:11.583342 containerd[1997]: time="2025-12-16T13:20:11.583021227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 13:20:11.584488 kubelet[3309]: E1216 13:20:11.584429 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00"
Dec 16 13:20:11.920185 containerd[1997]: time="2025-12-16T13:20:11.919951116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:20:11.922185 containerd[1997]: time="2025-12-16T13:20:11.922132969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:20:11.926532 kubelet[3309]: E1216 13:20:11.924708 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:20:11.926532 kubelet[3309]: E1216 13:20:11.924774 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:20:11.926532 kubelet[3309]: E1216 13:20:11.924935 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h5xj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d6c9b69bb-zds8r_calico-system(5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:20:11.926532 kubelet[3309]: E1216 13:20:11.926136 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4"
Dec 16 13:20:11.950916 containerd[1997]: time="2025-12-16T13:20:11.922363230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:20:17.573573 containerd[1997]: time="2025-12-16T13:20:17.573513342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 13:20:17.851951 containerd[1997]: time="2025-12-16T13:20:17.851610789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:20:17.853878 containerd[1997]: time="2025-12-16T13:20:17.853811722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 13:20:17.854076 containerd[1997]: time="2025-12-16T13:20:17.853843211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 13:20:17.854137 kubelet[3309]: E1216 13:20:17.854099 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:20:17.854501 kubelet[3309]: E1216 13:20:17.854154 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 13:20:17.854501 kubelet[3309]: E1216 13:20:17.854290 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3b88f196c3d5438da0becadb94577a42,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:20:17.856779 containerd[1997]: time="2025-12-16T13:20:17.856723501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 13:20:18.123905 containerd[1997]: time="2025-12-16T13:20:18.123753781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:20:18.126196 containerd[1997]: time="2025-12-16T13:20:18.126139209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 13:20:18.126353 containerd[1997]: time="2025-12-16T13:20:18.126258218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:20:18.126517 kubelet[3309]: E1216 13:20:18.126451 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:20:18.126618 kubelet[3309]: E1216 13:20:18.126528 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 13:20:18.126719 kubelet[3309]: E1216 13:20:18.126670 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pxf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6746fb9d97-xwbhb_calico-system(c5ba7f20-53de-469e-ab23-68e66f875f69): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:20:18.128112 kubelet[3309]: E1216 13:20:18.128030 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69"
Dec 16 13:20:19.574398 containerd[1997]: time="2025-12-16T13:20:19.573959510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 13:20:19.831686 containerd[1997]: time="2025-12-16T13:20:19.831539189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:20:19.833771 containerd[1997]: time="2025-12-16T13:20:19.833725110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 13:20:19.833896 containerd[1997]: time="2025-12-16T13:20:19.833818740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 13:20:19.834029 kubelet[3309]: E1216 13:20:19.833991 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:20:19.834378 kubelet[3309]: E1216 13:20:19.834036 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 13:20:19.834378 kubelet[3309]: E1216 13:20:19.834190 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55kdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-hn4df_calico-apiserver(f26b9dbd-da7d-4346-92a8-22a4fd24016d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:20:19.835405 kubelet[3309]: E1216 13:20:19.835364 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d" Dec 16 13:20:22.573543 containerd[1997]: time="2025-12-16T13:20:22.573488594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:20:22.844489 containerd[1997]: time="2025-12-16T13:20:22.844362107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:20:22.846604 containerd[1997]: time="2025-12-16T13:20:22.846493418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:20:22.846604 containerd[1997]: time="2025-12-16T13:20:22.846594658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:20:22.846790 kubelet[3309]: E1216 13:20:22.846751 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:20:22.847131 kubelet[3309]: E1216 13:20:22.846797 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:20:22.847131 kubelet[3309]: E1216 13:20:22.846912 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx4qm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c88dfcf94-jsc58_calico-apiserver(51551839-7827-49fc-85bc-04ba08e6c0fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:20:22.848176 kubelet[3309]: E1216 13:20:22.848109 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe" Dec 16 13:20:23.573613 kubelet[3309]: E1216 13:20:23.573559 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4" Dec 16 13:20:24.572879 containerd[1997]: time="2025-12-16T13:20:24.572830419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:20:24.840154 containerd[1997]: time="2025-12-16T13:20:24.840008238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:20:24.842166 containerd[1997]: time="2025-12-16T13:20:24.842076197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:20:24.842377 containerd[1997]: time="2025-12-16T13:20:24.842079429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:20:24.842504 kubelet[3309]: E1216 13:20:24.842327 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:20:24.842504 kubelet[3309]: E1216 13:20:24.842372 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:20:24.842828 kubelet[3309]: E1216 13:20:24.842512 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbvqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kcbpt_calico-system(fa364a9d-bf3b-4e47-9ccb-93bdce84b381): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:20:24.844036 kubelet[3309]: E1216 13:20:24.843988 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381" Dec 16 
13:20:25.155908 systemd[1]: cri-containerd-1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793.scope: Deactivated successfully. Dec 16 13:20:25.156697 systemd[1]: cri-containerd-1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793.scope: Consumed 13.773s CPU time, 113M memory peak, 45.4M read from disk. Dec 16 13:20:25.250710 containerd[1997]: time="2025-12-16T13:20:25.250651492Z" level=info msg="received container exit event container_id:\"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\" id:\"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\" pid:3628 exit_status:1 exited_at:{seconds:1765891225 nanos:198479363}" Dec 16 13:20:25.358496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793-rootfs.mount: Deactivated successfully. Dec 16 13:20:25.914577 kubelet[3309]: I1216 13:20:25.914529 3309 scope.go:117] "RemoveContainer" containerID="1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793" Dec 16 13:20:25.926854 containerd[1997]: time="2025-12-16T13:20:25.926801270Z" level=info msg="CreateContainer within sandbox \"bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 13:20:25.961290 containerd[1997]: time="2025-12-16T13:20:25.961250044Z" level=info msg="Container 8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:20:26.078660 containerd[1997]: time="2025-12-16T13:20:26.078603403Z" level=info msg="CreateContainer within sandbox \"bf9eb17d62a9146217dae2e044d3d27bec1311835a5975a5c1b6c015a57faf8f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f\"" Dec 16 13:20:26.079405 containerd[1997]: time="2025-12-16T13:20:26.079372185Z" level=info msg="StartContainer for 
\"8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f\"" Dec 16 13:20:26.080578 containerd[1997]: time="2025-12-16T13:20:26.080545706Z" level=info msg="connecting to shim 8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f" address="unix:///run/containerd/s/483786798ebf588f00a1fc132bcf1a4bb806ab9e31c1d46771b556a3118f2539" protocol=ttrpc version=3 Dec 16 13:20:26.127300 systemd[1]: Started cri-containerd-8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f.scope - libcontainer container 8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f. Dec 16 13:20:26.178863 containerd[1997]: time="2025-12-16T13:20:26.178779406Z" level=info msg="StartContainer for \"8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f\" returns successfully" Dec 16 13:20:26.574370 containerd[1997]: time="2025-12-16T13:20:26.574154888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:20:26.594831 systemd[1]: cri-containerd-8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf.scope: Deactivated successfully. Dec 16 13:20:26.595240 systemd[1]: cri-containerd-8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf.scope: Consumed 3.839s CPU time, 85.5M memory peak, 59.1M read from disk. Dec 16 13:20:26.601501 containerd[1997]: time="2025-12-16T13:20:26.601244881Z" level=info msg="received container exit event container_id:\"8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf\" id:\"8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf\" pid:3142 exit_status:1 exited_at:{seconds:1765891226 nanos:600661101}" Dec 16 13:20:26.633851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf-rootfs.mount: Deactivated successfully. 
Dec 16 13:20:26.870740 containerd[1997]: time="2025-12-16T13:20:26.870598950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:20:26.872860 containerd[1997]: time="2025-12-16T13:20:26.872806835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:20:26.873035 containerd[1997]: time="2025-12-16T13:20:26.872900121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:20:26.873140 kubelet[3309]: E1216 13:20:26.873092 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:20:26.873238 kubelet[3309]: E1216 13:20:26.873149 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:20:26.873295 kubelet[3309]: E1216 13:20:26.873251 3309 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:20:26.875418 containerd[1997]: time="2025-12-16T13:20:26.875383331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:20:26.913549 kubelet[3309]: I1216 13:20:26.913516 3309 scope.go:117] "RemoveContainer" containerID="8e527e8a47a9f8f19dd0c1401e0f621b4e05f5c98e29deba858437d5584d6acf" Dec 16 13:20:26.923093 containerd[1997]: time="2025-12-16T13:20:26.922884859Z" level=info msg="CreateContainer within sandbox \"1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 13:20:26.952401 containerd[1997]: time="2025-12-16T13:20:26.951378175Z" level=info msg="Container 1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:20:26.955098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551058238.mount: Deactivated successfully. Dec 16 13:20:26.959568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800325647.mount: Deactivated successfully. 
Dec 16 13:20:26.968275 containerd[1997]: time="2025-12-16T13:20:26.968227853Z" level=info msg="CreateContainer within sandbox \"1c56f0250e1708f1359935890a3cda2f20397f4123fd051c986129eac98a34c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816\"" Dec 16 13:20:26.968964 containerd[1997]: time="2025-12-16T13:20:26.968931359Z" level=info msg="StartContainer for \"1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816\"" Dec 16 13:20:26.970066 containerd[1997]: time="2025-12-16T13:20:26.970018187Z" level=info msg="connecting to shim 1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816" address="unix:///run/containerd/s/c48623898b99735315fa7c2126345cac4fb62742ba674ba2801bf263032d7fa8" protocol=ttrpc version=3 Dec 16 13:20:26.993285 systemd[1]: Started cri-containerd-1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816.scope - libcontainer container 1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816. 
Dec 16 13:20:27.077200 containerd[1997]: time="2025-12-16T13:20:27.077169746Z" level=info msg="StartContainer for \"1a143a455ae0bf6e4e4b3c7374f275a9f9ef4ae064c8f4dc96c187b01350f816\" returns successfully" Dec 16 13:20:27.194640 containerd[1997]: time="2025-12-16T13:20:27.194490722Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:20:27.197918 containerd[1997]: time="2025-12-16T13:20:27.197854410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:20:27.198816 containerd[1997]: time="2025-12-16T13:20:27.197941787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:20:27.199101 kubelet[3309]: E1216 13:20:27.199007 3309 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:20:27.199427 kubelet[3309]: E1216 13:20:27.199105 3309 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:20:27.199427 kubelet[3309]: E1216 13:20:27.199211 3309 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbmzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-p2jzr_calico-system(0d725eaf-63cb-4894-b5fe-56fa81c91e00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:20:27.200480 kubelet[3309]: E1216 13:20:27.200411 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-p2jzr" podUID="0d725eaf-63cb-4894-b5fe-56fa81c91e00" Dec 16 13:20:30.158334 systemd[1]: cri-containerd-e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b.scope: Deactivated successfully. Dec 16 13:20:30.158952 systemd[1]: cri-containerd-e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b.scope: Consumed 2.854s CPU time, 37.5M memory peak, 34M read from disk. 
Dec 16 13:20:30.160776 containerd[1997]: time="2025-12-16T13:20:30.160515082Z" level=info msg="received container exit event container_id:\"e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b\" id:\"e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b\" pid:3153 exit_status:1 exited_at:{seconds:1765891230 nanos:159518432}"
Dec 16 13:20:30.192755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b-rootfs.mount: Deactivated successfully.
Dec 16 13:20:30.934930 kubelet[3309]: I1216 13:20:30.934617 3309 scope.go:117] "RemoveContainer" containerID="e78b127957bface6463a3ee73bceddb6a06917153fd8f6c0455c3d8e1552234b"
Dec 16 13:20:30.937824 containerd[1997]: time="2025-12-16T13:20:30.937602132Z" level=info msg="CreateContainer within sandbox \"dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 16 13:20:30.955819 containerd[1997]: time="2025-12-16T13:20:30.955389929Z" level=info msg="Container a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:20:30.973714 containerd[1997]: time="2025-12-16T13:20:30.973667768Z" level=info msg="CreateContainer within sandbox \"dce4e6355a576114d8a3d4975635c48a9790298fd9b9e7acd99cb1c77227bdff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095\""
Dec 16 13:20:30.974290 containerd[1997]: time="2025-12-16T13:20:30.974247212Z" level=info msg="StartContainer for \"a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095\""
Dec 16 13:20:30.975447 containerd[1997]: time="2025-12-16T13:20:30.975407117Z" level=info msg="connecting to shim a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095" address="unix:///run/containerd/s/d6a2e17957bc7d25295886fe7931416d7f619b47e1020f42cad61a0fe774beac" protocol=ttrpc version=3
Dec 16 13:20:31.005306 systemd[1]: Started cri-containerd-a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095.scope - libcontainer container a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095.
Dec 16 13:20:31.066596 containerd[1997]: time="2025-12-16T13:20:31.066551288Z" level=info msg="StartContainer for \"a4f2345968b3ac84ec3965015f27a1257ff2bf3f2a622c30101362e6fb721095\" returns successfully"
Dec 16 13:20:32.573171 kubelet[3309]: E1216 13:20:32.573126 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6746fb9d97-xwbhb" podUID="c5ba7f20-53de-469e-ab23-68e66f875f69"
Dec 16 13:20:33.575364 kubelet[3309]: E1216 13:20:33.575318 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-jsc58" podUID="51551839-7827-49fc-85bc-04ba08e6c0fe"
Dec 16 13:20:34.503102 kubelet[3309]: E1216 13:20:34.503008 3309 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-5?timeout=10s\": context deadline exceeded"
Dec 16 13:20:34.573147 kubelet[3309]: E1216 13:20:34.573103 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c88dfcf94-hn4df" podUID="f26b9dbd-da7d-4346-92a8-22a4fd24016d"
Dec 16 13:20:36.573389 kubelet[3309]: E1216 13:20:36.573306 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kcbpt" podUID="fa364a9d-bf3b-4e47-9ccb-93bdce84b381"
Dec 16 13:20:37.573032 kubelet[3309]: E1216 13:20:37.572896 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d6c9b69bb-zds8r" podUID="5a4c47ac-d579-4a1f-ba5b-fd9d5a3cbce4"
Dec 16 13:20:37.837209 systemd[1]: cri-containerd-8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f.scope: Deactivated successfully.
Dec 16 13:20:37.838064 systemd[1]: cri-containerd-8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f.scope: Consumed 368ms CPU time, 65.6M memory peak, 29.3M read from disk.
Dec 16 13:20:37.839480 containerd[1997]: time="2025-12-16T13:20:37.839379976Z" level=info msg="received container exit event container_id:\"8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f\" id:\"8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f\" pid:6067 exit_status:1 exited_at:{seconds:1765891237 nanos:837656399}"
Dec 16 13:20:37.869862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f-rootfs.mount: Deactivated successfully.
Dec 16 13:20:37.964691 kubelet[3309]: I1216 13:20:37.964664 3309 scope.go:117] "RemoveContainer" containerID="1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793"
Dec 16 13:20:37.965401 kubelet[3309]: I1216 13:20:37.965378 3309 scope.go:117] "RemoveContainer" containerID="8e3e974008f1fb3bebc51d31b9db38e540bd2a071eb01f882ed0cd57b808df8f"
Dec 16 13:20:37.965583 kubelet[3309]: E1216 13:20:37.965554 3309 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-m8zdq_tigera-operator(06589e1a-b43c-4aac-8144-493cd7ae2612)\"" pod="tigera-operator/tigera-operator-7dcd859c48-m8zdq" podUID="06589e1a-b43c-4aac-8144-493cd7ae2612"
Dec 16 13:20:37.986440 containerd[1997]: time="2025-12-16T13:20:37.986389363Z" level=info msg="RemoveContainer for \"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\""
Dec 16 13:20:38.013783 containerd[1997]: time="2025-12-16T13:20:38.013719197Z" level=info msg="RemoveContainer for \"1531e66d7b0b1be49212f140311999ec54780f32bc38c343ba4e453240980793\" returns successfully"