Nov 24 00:06:51.897508 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025 Nov 24 00:06:51.897547 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:06:51.897566 kernel: BIOS-provided physical RAM map: Nov 24 00:06:51.897579 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 24 00:06:51.897590 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 24 00:06:51.897602 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 24 00:06:51.897617 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 24 00:06:51.897630 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 24 00:06:51.897642 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 24 00:06:51.897655 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 24 00:06:51.897668 kernel: NX (Execute Disable) protection: active Nov 24 00:06:51.897683 kernel: APIC: Static calls initialized Nov 24 00:06:51.897695 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Nov 24 00:06:51.897708 kernel: extended physical RAM map: Nov 24 00:06:51.897724 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 24 00:06:51.897737 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Nov 24 00:06:51.897754 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Nov 24 00:06:51.897767 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Nov 24 00:06:51.897782 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 24 00:06:51.897795 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 24 00:06:51.897809 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 24 00:06:51.897823 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 24 00:06:51.897837 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 24 00:06:51.897850 kernel: efi: EFI v2.7 by EDK II Nov 24 00:06:51.897864 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Nov 24 00:06:51.897877 kernel: secureboot: Secure boot disabled Nov 24 00:06:51.897891 kernel: SMBIOS 2.7 present. 
Nov 24 00:06:51.897906 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 24 00:06:51.897920 kernel: DMI: Memory slots populated: 1/1 Nov 24 00:06:51.897933 kernel: Hypervisor detected: KVM Nov 24 00:06:51.897947 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 24 00:06:51.897960 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 24 00:06:51.897974 kernel: kvm-clock: using sched offset of 5529199298 cycles Nov 24 00:06:51.897989 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 24 00:06:51.898003 kernel: tsc: Detected 2499.998 MHz processor Nov 24 00:06:51.898017 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 00:06:51.898047 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 00:06:51.898063 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 24 00:06:51.898076 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 24 00:06:51.898088 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 00:06:51.898106 kernel: Using GB pages for direct mapping Nov 24 00:06:51.898119 kernel: ACPI: Early table checksum verification disabled Nov 24 00:06:51.898132 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 24 00:06:51.898145 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 24 00:06:51.898163 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 24 00:06:51.898175 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 24 00:06:51.898189 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 24 00:06:51.898202 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 24 00:06:51.898215 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 24 00:06:51.898229 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 24 00:06:51.898243 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 24 00:06:51.898256 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 24 00:06:51.898273 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 24 00:06:51.898286 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 24 00:06:51.898300 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 24 00:06:51.898313 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 24 00:06:51.898327 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 24 00:06:51.898341 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 24 00:06:51.898354 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 24 00:06:51.898368 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 24 00:06:51.898384 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Nov 24 00:06:51.898398 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 24 00:06:51.898412 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 24 00:06:51.898425 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 24 00:06:51.898440 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Nov 24 00:06:51.898453 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Nov 24 00:06:51.898467 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 24 00:06:51.898480 kernel: NUMA: Initialized distance table, cnt=1 Nov 24 00:06:51.898492 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Nov 24 00:06:51.898506 kernel: Zone ranges: Nov 24 00:06:51.898522 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 00:06:51.898534 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 24 00:06:51.898548 kernel: Normal empty Nov 24 00:06:51.898561 kernel: Device empty Nov 24 00:06:51.898575 kernel: Movable zone start for each node Nov 24 00:06:51.898588 kernel: Early memory node ranges Nov 24 00:06:51.898602 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 24 00:06:51.898615 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 24 00:06:51.898627 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 24 00:06:51.898646 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 24 00:06:51.898658 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 00:06:51.898670 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 24 00:06:51.898683 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 24 00:06:51.898697 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 24 00:06:51.898712 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 24 00:06:51.898725 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 24 00:06:51.898738 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 24 00:06:51.898752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 24 00:06:51.898770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 00:06:51.898782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 24 00:06:51.898795 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 24 00:06:51.898809 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 24 00:06:51.898823 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 24 00:06:51.898836 kernel: TSC deadline timer available Nov 24 00:06:51.898848 kernel: CPU topo: Max. logical packages: 1 Nov 24 00:06:51.898861 kernel: CPU topo: Max. logical dies: 1 Nov 24 00:06:51.898874 kernel: CPU topo: Max. dies per package: 1 Nov 24 00:06:51.898887 kernel: CPU topo: Max. threads per core: 2 Nov 24 00:06:51.898904 kernel: CPU topo: Num. cores per package: 1 Nov 24 00:06:51.898919 kernel: CPU topo: Num. 
threads per package: 2 Nov 24 00:06:51.898932 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 24 00:06:51.898946 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 24 00:06:51.898960 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 24 00:06:51.898974 kernel: Booting paravirtualized kernel on KVM Nov 24 00:06:51.898989 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 00:06:51.899002 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 24 00:06:51.899017 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 24 00:06:51.900166 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 24 00:06:51.900185 kernel: pcpu-alloc: [0] 0 1 Nov 24 00:06:51.900200 kernel: kvm-guest: PV spinlocks enabled Nov 24 00:06:51.900216 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 00:06:51.900233 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:06:51.900249 kernel: random: crng init done Nov 24 00:06:51.900264 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 24 00:06:51.900279 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 24 00:06:51.900299 kernel: Fallback order for Node 0: 0 Nov 24 00:06:51.900314 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Nov 24 00:06:51.900329 kernel: Policy zone: DMA32 Nov 24 00:06:51.900355 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 00:06:51.900375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 24 00:06:51.900391 kernel: Kernel/User page tables isolation: enabled Nov 24 00:06:51.900407 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 00:06:51.900422 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 00:06:51.900438 kernel: Dynamic Preempt: voluntary Nov 24 00:06:51.900454 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 00:06:51.900471 kernel: rcu: RCU event tracing is enabled. Nov 24 00:06:51.900490 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 24 00:06:51.900506 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 00:06:51.900522 kernel: Rude variant of Tasks RCU enabled. Nov 24 00:06:51.900538 kernel: Tracing variant of Tasks RCU enabled. Nov 24 00:06:51.900554 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 00:06:51.900570 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 24 00:06:51.900590 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:06:51.900606 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:06:51.900622 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 24 00:06:51.900638 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 24 00:06:51.900654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 24 00:06:51.900670 kernel: Console: colour dummy device 80x25 Nov 24 00:06:51.900686 kernel: printk: legacy console [tty0] enabled Nov 24 00:06:51.900701 kernel: printk: legacy console [ttyS0] enabled Nov 24 00:06:51.900720 kernel: ACPI: Core revision 20240827 Nov 24 00:06:51.900736 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 24 00:06:51.900752 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 00:06:51.900768 kernel: x2apic enabled Nov 24 00:06:51.900783 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 00:06:51.900800 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 24 00:06:51.900816 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Nov 24 00:06:51.900832 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 24 00:06:51.900847 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 24 00:06:51.900866 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 00:06:51.900882 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 00:06:51.900897 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 00:06:51.900913 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Nov 24 00:06:51.900928 kernel: RETBleed: Vulnerable Nov 24 00:06:51.900944 kernel: Speculative Store Bypass: Vulnerable Nov 24 00:06:51.900959 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 24 00:06:51.900975 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 24 00:06:51.900991 kernel: GDS: Unknown: Dependent on hypervisor status Nov 24 00:06:51.901006 kernel: active return thunk: its_return_thunk Nov 24 00:06:51.901021 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 24 00:06:51.901057 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 00:06:51.901072 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 00:06:51.901085 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 00:06:51.901098 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 24 00:06:51.901112 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 24 00:06:51.901126 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 24 00:06:51.901140 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 24 00:06:51.901153 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 24 00:06:51.901168 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 24 00:06:51.901183 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 00:06:51.901198 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 24 00:06:51.901214 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 24 00:06:51.901228 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 24 00:06:51.901241 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 24 00:06:51.901254 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 24 00:06:51.901267 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 24 
00:06:51.901281 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Nov 24 00:06:51.901294 kernel: Freeing SMP alternatives memory: 32K Nov 24 00:06:51.901309 kernel: pid_max: default: 32768 minimum: 301 Nov 24 00:06:51.901322 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 00:06:51.901335 kernel: landlock: Up and running. Nov 24 00:06:51.901351 kernel: SELinux: Initializing. Nov 24 00:06:51.901366 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 24 00:06:51.901386 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 24 00:06:51.901402 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 24 00:06:51.901416 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 24 00:06:51.901429 kernel: signal: max sigframe size: 3632 Nov 24 00:06:51.901444 kernel: rcu: Hierarchical SRCU implementation. Nov 24 00:06:51.901460 kernel: rcu: Max phase no-delay instances is 400. Nov 24 00:06:51.901474 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 24 00:06:51.901488 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 24 00:06:51.901501 kernel: smp: Bringing up secondary CPUs ... Nov 24 00:06:51.901519 kernel: smpboot: x86: Booting SMP configuration: Nov 24 00:06:51.901532 kernel: .... node #0, CPUs: #1 Nov 24 00:06:51.901546 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 24 00:06:51.901561 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 24 00:06:51.901574 kernel: smp: Brought up 1 node, 2 CPUs Nov 24 00:06:51.901587 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Nov 24 00:06:51.901601 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 133380K reserved, 0K cma-reserved) Nov 24 00:06:51.901615 kernel: devtmpfs: initialized Nov 24 00:06:51.901628 kernel: x86/mm: Memory block size: 128MB Nov 24 00:06:51.901646 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 24 00:06:51.901661 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 00:06:51.901674 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 24 00:06:51.906050 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 00:06:51.906077 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 00:06:51.906092 kernel: audit: initializing netlink subsys (disabled) Nov 24 00:06:51.906106 kernel: audit: type=2000 audit(1763942809.413:1): state=initialized audit_enabled=0 res=1 Nov 24 00:06:51.906120 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 00:06:51.906143 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 00:06:51.906159 kernel: cpuidle: using governor menu Nov 24 00:06:51.906175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 00:06:51.906191 kernel: dca service started, version 1.12.1 Nov 24 00:06:51.906205 kernel: PCI: Using configuration type 1 for base access Nov 24 00:06:51.906220 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Nov 24 00:06:51.906235 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 00:06:51.906249 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 00:06:51.906264 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 00:06:51.906280 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 00:06:51.906293 kernel: ACPI: Added _OSI(Module Device) Nov 24 00:06:51.906311 kernel: ACPI: Added _OSI(Processor Device) Nov 24 00:06:51.906329 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 00:06:51.906349 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 24 00:06:51.906369 kernel: ACPI: Interpreter enabled Nov 24 00:06:51.906387 kernel: ACPI: PM: (supports S0 S5) Nov 24 00:06:51.906407 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 00:06:51.906421 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 00:06:51.906436 kernel: PCI: Using E820 reservations for host bridge windows Nov 24 00:06:51.906455 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 24 00:06:51.906471 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 24 00:06:51.906744 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 24 00:06:51.906890 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 24 00:06:51.907044 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 24 00:06:51.907065 kernel: acpiphp: Slot [3] registered Nov 24 00:06:51.907081 kernel: acpiphp: Slot [4] registered Nov 24 00:06:51.907102 kernel: acpiphp: Slot [5] registered Nov 24 00:06:51.907118 kernel: acpiphp: Slot [6] registered Nov 24 00:06:51.907134 kernel: acpiphp: Slot [7] registered Nov 24 00:06:51.907149 kernel: acpiphp: Slot [8] registered Nov 24 00:06:51.907165 kernel: acpiphp: Slot [9] registered Nov 24 00:06:51.907180 kernel: acpiphp: Slot [10] registered Nov 24 00:06:51.907196 kernel: acpiphp: Slot [11] registered Nov 24 00:06:51.907211 kernel: acpiphp: Slot [12] registered Nov 24 00:06:51.907227 kernel: acpiphp: Slot [13] registered Nov 24 00:06:51.907245 kernel: acpiphp: Slot [14] registered Nov 24 00:06:51.907261 kernel: acpiphp: Slot [15] registered Nov 24 00:06:51.907276 kernel: acpiphp: Slot [16] registered Nov 24 00:06:51.907291 kernel: acpiphp: Slot [17] registered Nov 24 00:06:51.907307 kernel: acpiphp: Slot [18] registered Nov 24 00:06:51.907323 kernel: acpiphp: Slot [19] registered Nov 24 00:06:51.907338 kernel: acpiphp: Slot [20] registered Nov 24 00:06:51.907353 kernel: acpiphp: Slot [21] registered Nov 24 00:06:51.907369 kernel: acpiphp: Slot [22] registered Nov 24 00:06:51.907384 kernel: acpiphp: Slot [23] registered Nov 24 00:06:51.907403 kernel: acpiphp: Slot [24] registered Nov 24 00:06:51.907418 kernel: acpiphp: Slot [25] registered Nov 24 00:06:51.907434 kernel: acpiphp: Slot [26] registered Nov 24 00:06:51.907449 kernel: acpiphp: Slot [27] registered Nov 24 00:06:51.907465 kernel: acpiphp: Slot [28] registered Nov 24 00:06:51.907480 kernel: acpiphp: Slot [29] registered Nov 24 00:06:51.907496 kernel: acpiphp: Slot [30] registered Nov 24 00:06:51.907511 kernel: acpiphp: Slot [31] registered Nov 24 00:06:51.907527 kernel: PCI host bridge to bus 0000:00 Nov 24 00:06:51.907680 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 24 
00:06:51.907821 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 24 00:06:51.907947 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 24 00:06:51.909827 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 24 00:06:51.909989 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 24 00:06:51.910132 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 24 00:06:51.910304 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 24 00:06:51.910459 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 24 00:06:51.910609 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Nov 24 00:06:51.910755 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 24 00:06:51.910885 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 24 00:06:51.911012 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 24 00:06:51.911175 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 24 00:06:51.911310 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 24 00:06:51.911440 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 24 00:06:51.911570 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 24 00:06:51.911709 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Nov 24 00:06:51.911856 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Nov 24 00:06:51.911987 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 24 00:06:51.914227 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 24 00:06:51.914399 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Nov 24 00:06:51.914527 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Nov 24 00:06:51.914659 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Nov 24 00:06:51.914782 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Nov 24 00:06:51.914801 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 24 00:06:51.914816 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 24 00:06:51.914831 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 24 00:06:51.914849 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 24 00:06:51.914863 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 24 00:06:51.914878 kernel: iommu: Default domain type: Translated Nov 24 00:06:51.914892 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 00:06:51.914906 kernel: efivars: Registered efivars operations Nov 24 00:06:51.914921 kernel: PCI: Using ACPI for IRQ routing Nov 24 00:06:51.914935 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 24 00:06:51.914950 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Nov 24 00:06:51.914965 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 24 00:06:51.914982 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 24 00:06:51.915116 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 24 00:06:51.915237 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 24 00:06:51.915381 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 24 00:06:51.915400 kernel: vgaarb: loaded Nov 24 00:06:51.915415 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 24 00:06:51.915431 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Nov 24 00:06:51.915447 kernel: clocksource: Switched to clocksource kvm-clock Nov 24 00:06:51.915467 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 00:06:51.915482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 00:06:51.915500 kernel: pnp: PnP ACPI init Nov 24 00:06:51.915514 kernel: pnp: PnP ACPI: found 5 devices Nov 24 00:06:51.915529 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 00:06:51.915544 kernel: NET: Registered PF_INET protocol family Nov 24 00:06:51.915560 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 24 00:06:51.915576 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 24 00:06:51.915591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 00:06:51.915611 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 00:06:51.915625 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 24 00:06:51.915641 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 24 00:06:51.915657 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 24 00:06:51.915671 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 24 00:06:51.915687 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 00:06:51.915700 kernel: NET: Registered PF_XDP protocol family Nov 24 00:06:51.915855 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 24 00:06:51.915979 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 24 00:06:51.916185 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 24 00:06:51.916316 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 24 00:06:51.916431 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 24 00:06:51.916572 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 24 00:06:51.916591 kernel: PCI: CLS 0 bytes, default 64 Nov 24 00:06:51.916606 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 24 00:06:51.916622 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 24 00:06:51.916637 kernel: clocksource: Switched to clocksource tsc Nov 24 00:06:51.916656 kernel: Initialise system trusted keyrings Nov 24 00:06:51.916671 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 24 00:06:51.916686 kernel: Key type asymmetric registered Nov 24 00:06:51.916701 kernel: Asymmetric key parser 'x509' registered Nov 24 00:06:51.916716 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 24 00:06:51.916730 kernel: io scheduler mq-deadline registered Nov 24 00:06:51.916745 kernel: io scheduler kyber registered Nov 24 00:06:51.916760 kernel: io scheduler bfq registered Nov 24 00:06:51.916774 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 00:06:51.916793 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 00:06:51.916808 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:06:51.916823 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 24 00:06:51.916838 kernel: i8042: Warning: Keylock active Nov 24 
00:06:51.916853 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 24 00:06:51.916868 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 24 00:06:51.917016 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 24 00:06:51.917938 kernel: rtc_cmos 00:00: registered as rtc0 Nov 24 00:06:51.920156 kernel: rtc_cmos 00:00: setting system clock to 2025-11-24T00:06:51 UTC (1763942811) Nov 24 00:06:51.920303 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 24 00:06:51.920348 kernel: intel_pstate: CPU model not supported Nov 24 00:06:51.920368 kernel: efifb: probing for efifb Nov 24 00:06:51.920386 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Nov 24 00:06:51.920403 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 24 00:06:51.920420 kernel: efifb: scrolling: redraw Nov 24 00:06:51.920436 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 24 00:06:51.920452 kernel: Console: switching to colour frame buffer device 100x37 Nov 24 00:06:51.920470 kernel: fb0: EFI VGA frame buffer device Nov 24 00:06:51.920487 kernel: pstore: Using crash dump compression: deflate Nov 24 00:06:51.920513 kernel: pstore: Registered efi_pstore as persistent store backend Nov 24 00:06:51.920525 kernel: NET: Registered PF_INET6 protocol family Nov 24 00:06:51.920540 kernel: Segment Routing with IPv6 Nov 24 00:06:51.920556 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 00:06:51.920572 kernel: NET: Registered PF_PACKET protocol family Nov 24 00:06:51.920592 kernel: Key type dns_resolver registered Nov 24 00:06:51.920609 kernel: IPI shorthand broadcast: enabled Nov 24 00:06:51.920624 kernel: sched_clock: Marking stable (2701002143, 154769792)->(2963197651, -107425716) Nov 24 00:06:51.920639 kernel: registered taskstats version 1 Nov 24 00:06:51.920655 kernel: Loading compiled-in X.509 certificates Nov 24 00:06:51.920672 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607' Nov 24 00:06:51.920686 kernel: Demotion targets for Node 0: null Nov 24 00:06:51.920701 kernel: Key type .fscrypt registered Nov 24 00:06:51.920716 kernel: Key type fscrypt-provisioning registered Nov 24 00:06:51.920731 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 24 00:06:51.920745 kernel: ima: Allocated hash algorithm: sha1 Nov 24 00:06:51.920769 kernel: ima: No architecture policies found Nov 24 00:06:51.920783 kernel: clk: Disabling unused clocks Nov 24 00:06:51.920797 kernel: Warning: unable to open an initial console. Nov 24 00:06:51.920810 kernel: Freeing unused kernel image (initmem) memory: 46200K Nov 24 00:06:51.920825 kernel: Write protecting the kernel read-only data: 40960k Nov 24 00:06:51.920843 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 00:06:51.920858 kernel: Run /init as init process Nov 24 00:06:51.920873 kernel: with arguments: Nov 24 00:06:51.920886 kernel: /init Nov 24 00:06:51.920900 kernel: with environment: Nov 24 00:06:51.920913 kernel: HOME=/ Nov 24 00:06:51.920929 kernel: TERM=linux Nov 24 00:06:51.920946 systemd[1]: Successfully made /usr/ read-only. 
Nov 24 00:06:51.920965 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:06:51.920986 systemd[1]: Detected virtualization amazon. Nov 24 00:06:51.921003 systemd[1]: Detected architecture x86-64. Nov 24 00:06:51.921019 systemd[1]: Running in initrd. Nov 24 00:06:51.921049 systemd[1]: No hostname configured, using default hostname. Nov 24 00:06:51.921065 systemd[1]: Hostname set to . Nov 24 00:06:51.921082 systemd[1]: Initializing machine ID from VM UUID. Nov 24 00:06:51.921100 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:06:51.921122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:06:51.921140 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:06:51.921159 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:06:51.921176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:06:51.921194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:06:51.921214 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:06:51.921233 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:06:51.921254 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 00:06:51.921272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:06:51.921289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:06:51.921307 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:06:51.921325 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:06:51.921343 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:06:51.921362 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:06:51.921379 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:06:51.921400 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:06:51.921418 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 00:06:51.921436 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 00:06:51.921452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:06:51.921467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:06:51.921483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:06:51.921501 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:06:51.921519 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 00:06:51.921536 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:06:51.921558 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 24 00:06:51.921576 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 00:06:51.921594 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 00:06:51.921611 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:06:51.921629 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:06:51.921646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:06:51.921662 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 00:06:51.921681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:06:51.921733 systemd-journald[188]: Collecting audit messages is disabled. Nov 24 00:06:51.921774 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 00:06:51.921791 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:06:51.921812 systemd-journald[188]: Journal started Nov 24 00:06:51.921845 systemd-journald[188]: Runtime Journal (/run/log/journal/ec25b90e4295cacb72ccce32b69f9fac) is 4.7M, max 38.1M, 33.3M free. Nov 24 00:06:51.920880 systemd-modules-load[190]: Inserted module 'overlay' Nov 24 00:06:51.927436 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:06:51.936216 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:06:51.946955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:06:51.949657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:06:51.958181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 00:06:51.964304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:06:51.974823 systemd-tmpfiles[203]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 00:06:51.982693 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 00:06:51.983822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:06:51.992060 kernel: Bridge firewalling registered Nov 24 00:06:51.991213 systemd-modules-load[190]: Inserted module 'br_netfilter' Nov 24 00:06:51.994368 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:06:51.999231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:06:52.002187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:06:52.008825 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 24 00:06:52.016298 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:06:52.019004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:06:52.033250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:06:52.037263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 24 00:06:52.071069 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 00:06:52.117507 systemd-resolved[228]: Positive Trust Anchors: Nov 24 00:06:52.117524 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:06:52.117591 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:06:52.126435 systemd-resolved[228]: Defaulting to hostname 'linux'. Nov 24 00:06:52.127921 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:06:52.130736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:06:52.181075 kernel: SCSI subsystem initialized Nov 24 00:06:52.200770 kernel: Loading iSCSI transport class v2.0-870. Nov 24 00:06:52.229058 kernel: iscsi: registered transport (tcp) Nov 24 00:06:52.254303 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:06:52.254395 kernel: QLogic iSCSI HBA Driver Nov 24 00:06:52.276197 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:06:52.306644 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:06:52.308274 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:06:52.358265 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:06:52.360916 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 00:06:52.418086 kernel: raid6: avx512x4 gen() 17772 MB/s Nov 24 00:06:52.436061 kernel: raid6: avx512x2 gen() 17566 MB/s Nov 24 00:06:52.454078 kernel: raid6: avx512x1 gen() 17252 MB/s Nov 24 00:06:52.472062 kernel: raid6: avx2x4 gen() 17485 MB/s Nov 24 00:06:52.490066 kernel: raid6: avx2x2 gen() 17447 MB/s Nov 24 00:06:52.508472 kernel: raid6: avx2x1 gen() 13312 MB/s Nov 24 00:06:52.508546 kernel: raid6: using algorithm avx512x4 gen() 17772 MB/s Nov 24 00:06:52.527356 kernel: raid6: .... xor() 6339 MB/s, rmw enabled Nov 24 00:06:52.527431 kernel: raid6: using avx512x2 recovery algorithm Nov 24 00:06:52.549081 kernel: xor: automatically using best checksumming function avx Nov 24 00:06:52.722066 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:06:52.729159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:06:52.731688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:06:52.763054 systemd-udevd[437]: Using default interface naming scheme 'v255'. 
Nov 24 00:06:52.770628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:06:52.775460 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:06:52.801197 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Nov 24 00:06:52.829805 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:06:52.832537 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:06:52.896840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:06:52.901317 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:06:52.989328 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 24 00:06:52.989661 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 24 00:06:53.002053 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:06:53.007069 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 24 00:06:53.025077 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:e5:45:c0:85:d3 Nov 24 00:06:53.028889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:06:53.029015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:06:53.031500 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:06:53.034108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:06:53.035790 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:06:53.054352 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Nov 24 00:06:53.054411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:06:53.054543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:06:53.057605 (udev-worker)[482]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:06:53.059019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:06:53.065073 kernel: AES CTR mode by8 optimization enabled Nov 24 00:06:53.087059 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 24 00:06:53.093060 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 24 00:06:53.110611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:06:53.114074 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 24 00:06:53.122203 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 00:06:53.122280 kernel: GPT:9289727 != 33554431 Nov 24 00:06:53.122294 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 00:06:53.123618 kernel: GPT:9289727 != 33554431 Nov 24 00:06:53.125212 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 00:06:53.128193 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:06:53.175063 kernel: nvme nvme0: using unchecked data buffer Nov 24 00:06:53.286246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 24 00:06:53.316734 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 24 00:06:53.317714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 00:06:53.338860 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Nov 24 00:06:53.349490 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 24 00:06:53.350323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 24 00:06:53.351935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:06:53.353135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:06:53.354287 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:06:53.356200 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:06:53.359063 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:06:53.379084 disk-uuid[669]: Primary Header is updated. Nov 24 00:06:53.379084 disk-uuid[669]: Secondary Entries is updated. Nov 24 00:06:53.379084 disk-uuid[669]: Secondary Header is updated. Nov 24 00:06:53.388094 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:06:53.389261 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:06:54.407064 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:06:54.407144 disk-uuid[672]: The operation has completed successfully. Nov 24 00:06:54.561455 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:06:54.561615 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:06:54.601314 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:06:54.624835 sh[937]: Success Nov 24 00:06:54.652653 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:06:54.652766 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:06:54.653544 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:06:54.667108 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Nov 24 00:06:54.772084 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:06:54.777155 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:06:54.795547 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 24 00:06:54.815059 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (960) Nov 24 00:06:54.818068 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 00:06:54.818152 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:06:54.915991 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 24 00:06:54.916085 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:06:54.916100 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:06:54.942953 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:06:54.944593 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:06:54.945660 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:06:54.946863 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 00:06:54.949871 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 24 00:06:54.987065 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (991) Nov 24 00:06:54.993685 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:06:54.993775 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:06:55.000855 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:06:55.000954 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:06:55.009074 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:06:55.009995 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:06:55.015238 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 00:06:55.073167 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:06:55.076474 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:06:55.124734 systemd-networkd[1129]: lo: Link UP Nov 24 00:06:55.124749 systemd-networkd[1129]: lo: Gained carrier Nov 24 00:06:55.126700 systemd-networkd[1129]: Enumeration completed Nov 24 00:06:55.126852 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:06:55.127183 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:06:55.127189 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:06:55.128010 systemd[1]: Reached target network.target - Network. Nov 24 00:06:55.131563 systemd-networkd[1129]: eth0: Link UP Nov 24 00:06:55.131569 systemd-networkd[1129]: eth0: Gained carrier Nov 24 00:06:55.131588 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:06:55.148197 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.16.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 24 00:06:55.537171 ignition[1056]: Ignition 2.22.0 Nov 24 00:06:55.537187 ignition[1056]: Stage: fetch-offline Nov 24 00:06:55.537454 ignition[1056]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:55.537467 ignition[1056]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:55.538295 ignition[1056]: Ignition finished successfully Nov 24 00:06:55.541145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:06:55.542818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 24 00:06:55.580287 ignition[1139]: Ignition 2.22.0 Nov 24 00:06:55.580308 ignition[1139]: Stage: fetch Nov 24 00:06:55.580914 ignition[1139]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:55.580928 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:55.581074 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:55.635607 ignition[1139]: PUT result: OK Nov 24 00:06:55.639422 ignition[1139]: parsed url from cmdline: "" Nov 24 00:06:55.639437 ignition[1139]: no config URL provided Nov 24 00:06:55.639448 ignition[1139]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:06:55.639464 ignition[1139]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:06:55.639492 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:55.650145 ignition[1139]: PUT result: OK Nov 24 00:06:55.650296 ignition[1139]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 24 00:06:55.658667 ignition[1139]: GET result: OK Nov 24 00:06:55.658870 ignition[1139]: parsing config with SHA512: a856dec8e9c5e5753e0fbabf3d20dfe573d4f961a32cdacb4372b1f17db91d451898009b4ea0f54c8e8215045438d5647c9dfa86300aed099131a9fe418e3ee4 Nov 24 00:06:55.666188 unknown[1139]: fetched base config from "system" Nov 24 00:06:55.666215 unknown[1139]: fetched base config from "system" Nov 24 00:06:55.666224 unknown[1139]: fetched user config from "aws" Nov 24 00:06:55.667655 ignition[1139]: fetch: fetch complete Nov 24 00:06:55.667664 ignition[1139]: fetch: fetch passed Nov 24 00:06:55.667770 ignition[1139]: Ignition finished successfully Nov 24 00:06:55.670713 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 00:06:55.673420 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:06:55.717121 ignition[1146]: Ignition 2.22.0 Nov 24 00:06:55.717137 ignition[1146]: Stage: kargs Nov 24 00:06:55.717596 ignition[1146]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:55.717610 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:55.717747 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:55.719354 ignition[1146]: PUT result: OK Nov 24 00:06:55.722380 ignition[1146]: kargs: kargs passed Nov 24 00:06:55.722462 ignition[1146]: Ignition finished successfully Nov 24 00:06:55.725410 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 00:06:55.726995 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 00:06:55.764732 ignition[1152]: Ignition 2.22.0 Nov 24 00:06:55.764750 ignition[1152]: Stage: disks Nov 24 00:06:55.765756 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:55.765771 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:55.765894 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:55.766984 ignition[1152]: PUT result: OK Nov 24 00:06:55.769930 ignition[1152]: disks: disks passed Nov 24 00:06:55.770014 ignition[1152]: Ignition finished successfully Nov 24 00:06:55.772598 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 00:06:55.773345 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:06:55.773753 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:06:55.774377 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 24 00:06:55.774946 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:06:55.775523 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:06:55.777498 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:06:55.820594 systemd-fsck[1160]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 00:06:55.824276 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:06:55.826744 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:06:56.136058 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 00:06:56.137172 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:06:56.138215 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 00:06:56.141367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:06:56.145165 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:06:56.147211 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 24 00:06:56.148610 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:06:56.148659 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:06:56.157880 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:06:56.160899 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 00:06:56.172188 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1179) Nov 24 00:06:56.177058 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:06:56.177143 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:06:56.188144 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:06:56.188251 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:06:56.189993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:06:56.536789 initrd-setup-root[1203]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:06:56.577093 initrd-setup-root[1210]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:06:56.583412 initrd-setup-root[1217]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:06:56.589512 initrd-setup-root[1224]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:06:56.693557 systemd-networkd[1129]: eth0: Gained IPv6LL Nov 24 00:06:56.852650 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:06:56.855256 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:06:56.859206 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 00:06:56.875436 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 00:06:56.878059 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:06:56.905887 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 24 00:06:56.917182 ignition[1291]: INFO : Ignition 2.22.0 Nov 24 00:06:56.917182 ignition[1291]: INFO : Stage: mount Nov 24 00:06:56.919082 ignition[1291]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:56.919082 ignition[1291]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:56.919082 ignition[1291]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:56.920948 ignition[1291]: INFO : PUT result: OK Nov 24 00:06:56.921996 ignition[1291]: INFO : mount: mount passed Nov 24 00:06:56.923121 ignition[1291]: INFO : Ignition finished successfully Nov 24 00:06:56.924120 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:06:56.926141 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:06:57.139179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:06:57.184126 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1304) Nov 24 00:06:57.188276 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:06:57.188359 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:06:57.199100 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:06:57.199180 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:06:57.203141 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:06:57.245840 ignition[1321]: INFO : Ignition 2.22.0 Nov 24 00:06:57.245840 ignition[1321]: INFO : Stage: files Nov 24 00:06:57.247879 ignition[1321]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:57.247879 ignition[1321]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:57.247879 ignition[1321]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:57.249527 ignition[1321]: INFO : PUT result: OK Nov 24 00:06:57.252476 ignition[1321]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:06:57.253938 ignition[1321]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:06:57.253938 ignition[1321]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:06:57.274317 ignition[1321]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:06:57.275575 ignition[1321]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:06:57.278405 unknown[1321]: wrote ssh authorized keys file for user: core Nov 24 00:06:57.279101 ignition[1321]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:06:57.290506 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:06:57.291476 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 00:06:57.367999 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:06:57.666161 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 00:06:57.666161 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:06:57.675043 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:06:57.683074 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:06:57.683074 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:06:57.683074 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 24 00:06:58.148874 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:06:58.606863 ignition[1321]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 24 00:06:58.606863 ignition[1321]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:06:58.618463 ignition[1321]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:06:58.623682 ignition[1321]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:06:58.623682 ignition[1321]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:06:58.623682 ignition[1321]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:06:58.628918 ignition[1321]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:06:58.628918 ignition[1321]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:06:58.628918 ignition[1321]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Nov 24 00:06:58.628918 ignition[1321]: INFO : files: files passed Nov 24 00:06:58.628918 ignition[1321]: INFO : Ignition finished successfully Nov 24 00:06:58.628199 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:06:58.630783 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:06:58.637387 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:06:58.647434 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:06:58.647610 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:06:58.666088 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:06:58.666088 initrd-setup-root-after-ignition[1351]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:06:58.669142 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:06:58.670865 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:06:58.671542 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:06:58.674113 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:06:58.761772 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:06:58.761965 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 00:06:58.763374 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:06:58.764736 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:06:58.765696 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:06:58.767010 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:06:58.807215 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:06:58.810248 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:06:58.837105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:06:58.837931 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:06:58.839111 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:06:58.840207 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:06:58.840412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:06:58.841736 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:06:58.842707 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:06:58.843551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:06:58.844868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:06:58.845730 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:06:58.846557 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:06:58.847404 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:06:58.848476 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 24 00:06:58.849390 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:06:58.850607 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:06:58.851410 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:06:58.852310 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:06:58.852557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:06:58.853600 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:06:58.854442 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:06:58.855121 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:06:58.856060 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:06:58.856624 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:06:58.856822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:06:58.858333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:06:58.858617 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:06:58.859303 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:06:58.859511 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:06:58.861452 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:06:58.863678 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:06:58.864062 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:06:58.867293 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:06:58.868173 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:06:58.868399 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:06:58.870900 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:06:58.871171 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:06:58.882521 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:06:58.883899 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:06:58.908673 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:06:58.911170 ignition[1375]: INFO : Ignition 2.22.0 Nov 24 00:06:58.911170 ignition[1375]: INFO : Stage: umount Nov 24 00:06:58.913615 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:06:58.913615 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:06:58.913615 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:06:58.916390 ignition[1375]: INFO : PUT result: OK Nov 24 00:06:58.917375 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:06:58.917524 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:06:58.919848 ignition[1375]: INFO : umount: umount passed Nov 24 00:06:58.919848 ignition[1375]: INFO : Ignition finished successfully Nov 24 00:06:58.921707 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:06:58.921854 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:06:58.923302 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:06:58.923438 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 24 00:06:58.924119 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:06:58.924194 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:06:58.924815 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:06:58.924878 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 00:06:58.925504 systemd[1]: Stopped target network.target - Network. Nov 24 00:06:58.926152 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:06:58.926222 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:06:58.927309 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:06:58.928070 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:06:58.932143 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:06:58.932734 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:06:58.933878 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:06:58.935216 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:06:58.935290 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:06:58.936098 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:06:58.936165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:06:58.936785 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:06:58.936870 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:06:58.937480 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:06:58.937543 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:06:58.938134 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:06:58.938204 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:06:58.938974 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:06:58.939659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:06:58.943468 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:06:58.944082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:06:58.949952 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:06:58.950376 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:06:58.950545 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:06:58.953060 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:06:58.954014 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:06:58.955222 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:06:58.955284 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:06:58.957169 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:06:58.957713 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:06:58.957792 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:06:58.958493 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:06:58.958556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 24 00:06:58.961195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:06:58.961274 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:06:58.962242 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:06:58.962320 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:06:58.963227 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:06:58.968827 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:06:58.968942 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:06:58.979307 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:06:58.979515 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:06:58.984580 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:06:58.984662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:06:58.985530 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:06:58.985581 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:06:58.986361 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:06:58.986433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:06:58.987883 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:06:58.987952 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 00:06:58.989133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:06:58.989203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:06:59.006208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:06:59.007335 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:06:59.008179 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:06:59.009738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:06:59.010177 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:06:59.011154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:06:59.011210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:06:59.014427 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 00:06:59.015306 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:06:59.015350 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:06:59.015951 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:06:59.016078 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:06:59.023263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:06:59.023458 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:06:59.025630 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:06:59.027611 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Nov 24 00:06:59.050709 systemd[1]: Switching root. Nov 24 00:06:59.092779 systemd-journald[188]: Journal stopped Nov 24 00:07:02.803495 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Nov 24 00:07:02.803630 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:07:02.803662 kernel: SELinux: policy capability open_perms=1 Nov 24 00:07:02.803686 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:07:02.803715 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:07:02.803742 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:07:02.803770 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:07:02.803792 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:07:02.803815 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:07:02.803837 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:07:02.803860 kernel: audit: type=1403 audit(1763942819.713:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:07:02.803887 systemd[1]: Successfully loaded SELinux policy in 99.523ms. Nov 24 00:07:02.803934 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.590ms. Nov 24 00:07:02.803967 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:07:02.803995 systemd[1]: Detected virtualization amazon. Nov 24 00:07:02.804017 systemd[1]: Detected architecture x86-64. Nov 24 00:07:02.811366 systemd[1]: Detected first boot. Nov 24 00:07:02.811404 systemd[1]: Initializing machine ID from VM UUID. Nov 24 00:07:02.811425 zram_generator::config[1419]: No configuration found. Nov 24 00:07:02.811445 kernel: Guest personality initialized and is inactive Nov 24 00:07:02.811467 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 00:07:02.811485 kernel: Initialized host personality Nov 24 00:07:02.811511 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:07:02.811529 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:07:02.811552 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:07:02.811578 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:07:02.811598 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:07:02.811617 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:07:02.811638 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:07:02.811658 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:07:02.811677 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:07:02.811708 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:07:02.811729 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:07:02.811749 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:07:02.811769 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:07:02.811789 systemd[1]: Created slice user.slice - User and Session Slice. 
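"Detected first boot" and "Initializing machine ID from VM UUID" above mean systemd found no existing /etc/machine-id and derived one from the hypervisor-provided DMI product UUID. Roughly, that is the UUID with dashes removed and lowercased; the sketch below only illustrates the idea and is not systemd's exact code path (the real logic in sd_id128 handles additional SMBIOS and container cases).

    # Rough illustration of deriving a machine-id-shaped value from the DMI
    # product UUID; systemd's real derivation lives in sd_id128 and covers more cases.
    def machine_id_from_dmi(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()  # 32 hex characters, like /etc/machine-id

    if __name__ == "__main__":
        print(machine_id_from_dmi())  # reading product_uuid requires root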
Nov 24 00:07:02.811809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:07:02.811829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:07:02.811848 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:07:02.811873 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:07:02.811893 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:07:02.811912 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:07:02.811932 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:07:02.811952 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:07:02.811971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:07:02.811990 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 00:07:02.812009 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:07:02.816310 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:07:02.816358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:07:02.816377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:07:02.816397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:07:02.816416 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:07:02.816435 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:07:02.816454 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:07:02.816474 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:07:02.816496 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:07:02.816525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:07:02.816546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:07:02.816566 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:07:02.816585 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:07:02.816603 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:07:02.816622 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:07:02.816642 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:07:02.816661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:02.816681 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:07:02.816708 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:07:02.816730 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:07:02.816753 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:07:02.816776 systemd[1]: Reached target machines.target - Containers. 
Nov 24 00:07:02.816798 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:07:02.816822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:07:02.816843 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:07:02.816864 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:07:02.816891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:07:02.816911 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:07:02.816932 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:07:02.816952 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:07:02.816972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:07:02.816992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 00:07:02.817013 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:07:02.822566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:07:02.822619 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:07:02.822654 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:07:02.822680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:07:02.822705 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:07:02.822728 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:07:02.822751 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:07:02.822777 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:07:02.822800 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:07:02.822826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:07:02.822855 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:07:02.822878 kernel: loop: module loaded Nov 24 00:07:02.822907 systemd[1]: Stopped verity-setup.service. Nov 24 00:07:02.822933 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:02.822958 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:07:02.822982 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:07:02.823007 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:07:02.825877 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:07:02.825930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:07:02.825955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:07:02.826061 systemd-journald[1505]: Collecting audit messages is disabled. Nov 24 00:07:02.826117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 24 00:07:02.826143 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:07:02.826166 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:07:02.826193 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:07:02.826217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:07:02.826240 kernel: fuse: init (API version 7.41) Nov 24 00:07:02.826265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:07:02.826294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:07:02.826319 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:07:02.826344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:07:02.826369 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:07:02.826395 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:07:02.826419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:07:02.826445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:07:02.826472 systemd-journald[1505]: Journal started Nov 24 00:07:02.826522 systemd-journald[1505]: Runtime Journal (/run/log/journal/ec25b90e4295cacb72ccce32b69f9fac) is 4.7M, max 38.1M, 33.3M free. Nov 24 00:07:02.054726 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:07:02.831158 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:07:02.080920 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 24 00:07:02.081454 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:07:02.834746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:07:02.838502 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:07:02.851971 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:07:02.870844 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:07:02.876387 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:07:02.882177 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:07:02.887372 kernel: ACPI: bus type drm_connector registered Nov 24 00:07:02.885466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:07:02.885541 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:07:02.891868 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:07:02.897902 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 00:07:02.899065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:07:02.902963 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:07:02.908305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:07:02.909151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 24 00:07:02.912161 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:07:02.914190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:07:02.917340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:07:02.926373 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:07:02.933554 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:07:02.937005 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:07:02.939388 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:07:02.940631 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:07:02.941681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:07:02.955415 systemd-journald[1505]: Time spent on flushing to /var/log/journal/ec25b90e4295cacb72ccce32b69f9fac is 110.671ms for 1018 entries. Nov 24 00:07:02.955415 systemd-journald[1505]: System Journal (/var/log/journal/ec25b90e4295cacb72ccce32b69f9fac) is 8M, max 195.6M, 187.6M free. Nov 24 00:07:03.082385 systemd-journald[1505]: Received client request to flush runtime journal. Nov 24 00:07:03.082451 kernel: loop0: detected capacity change from 0 to 110984 Nov 24 00:07:02.977707 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:07:02.978641 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:07:02.986257 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:07:03.011780 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:07:03.025718 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:07:03.086448 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:07:03.094873 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:07:03.100325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:07:03.102810 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:07:03.105779 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:07:03.162061 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:07:03.237569 kernel: loop1: detected capacity change from 0 to 72368 Nov 24 00:07:03.252296 systemd-tmpfiles[1567]: ACLs are not supported, ignoring. Nov 24 00:07:03.252327 systemd-tmpfiles[1567]: ACLs are not supported, ignoring. Nov 24 00:07:03.267438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:07:03.320902 kernel: loop2: detected capacity change from 0 to 128560 Nov 24 00:07:03.457362 kernel: loop3: detected capacity change from 0 to 229808 Nov 24 00:07:03.508063 kernel: loop4: detected capacity change from 0 to 110984 Nov 24 00:07:03.537037 kernel: loop5: detected capacity change from 0 to 72368 Nov 24 00:07:03.576058 kernel: loop6: detected capacity change from 0 to 128560 Nov 24 00:07:03.635073 kernel: loop7: detected capacity change from 0 to 229808 Nov 24 00:07:03.670472 (sd-merge)[1576]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
Nov 24 00:07:03.675103 (sd-merge)[1576]: Merged extensions into '/usr'. Nov 24 00:07:03.686339 systemd[1]: Reload requested from client PID 1552 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:07:03.686368 systemd[1]: Reloading... Nov 24 00:07:03.874440 zram_generator::config[1602]: No configuration found. Nov 24 00:07:04.329518 systemd[1]: Reloading finished in 642 ms. Nov 24 00:07:04.358335 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:07:04.379269 systemd[1]: Starting ensure-sysext.service... Nov 24 00:07:04.383021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:07:04.418161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:07:04.430819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:07:04.431890 systemd[1]: Reload requested from client PID 1653 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:07:04.431905 systemd[1]: Reloading... Nov 24 00:07:04.440661 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 00:07:04.440704 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:07:04.441144 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:07:04.441548 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:07:04.444950 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:07:04.445421 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Nov 24 00:07:04.445512 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Nov 24 00:07:04.453644 systemd-tmpfiles[1654]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:07:04.453663 systemd-tmpfiles[1654]: Skipping /boot Nov 24 00:07:04.488378 systemd-tmpfiles[1654]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:07:04.488397 systemd-tmpfiles[1654]: Skipping /boot Nov 24 00:07:04.516730 systemd-udevd[1657]: Using default interface naming scheme 'v255'. Nov 24 00:07:04.556082 zram_generator::config[1679]: No configuration found. Nov 24 00:07:04.821339 ldconfig[1547]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:07:04.940394 (udev-worker)[1715]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:07:04.997054 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 24 00:07:05.002059 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:07:05.011087 kernel: ACPI: button: Power Button [PWRF] Nov 24 00:07:05.013131 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 24 00:07:05.017057 kernel: ACPI: button: Sleep Button [SLPF] Nov 24 00:07:05.065095 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 24 00:07:05.188325 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:07:05.188487 systemd[1]: Reloading finished in 755 ms. Nov 24 00:07:05.201785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 24 00:07:05.204236 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:07:05.205506 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:07:05.238279 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:07:05.243407 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 00:07:05.246334 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:07:05.254367 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:07:05.261289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:07:05.270356 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:07:05.282731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:05.283063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:07:05.293443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:07:05.302146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:07:05.307926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:07:05.308831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:07:05.309056 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:07:05.309209 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:05.316165 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:07:05.327094 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:05.327522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:07:05.327893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:07:05.328523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:07:05.328772 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:05.339331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:05.339764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:07:05.345403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:07:05.346827 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 24 00:07:05.347137 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:07:05.348309 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:07:05.349134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:07:05.351749 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:07:05.361867 systemd[1]: Finished ensure-sysext.service. Nov 24 00:07:05.397398 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:07:05.410921 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:07:05.441089 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:07:05.446277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:07:05.449724 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:07:05.470232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:07:05.470576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:07:05.474667 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:07:05.492531 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:07:05.492829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:07:05.493788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:07:05.501570 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:07:05.501869 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:07:05.506727 augenrules[1892]: No rules Nov 24 00:07:05.510786 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:07:05.513210 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:07:05.522929 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:07:05.524332 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:07:05.562763 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:07:05.640829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:07:05.685360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:07:05.685656 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:07:05.690392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:07:05.763932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 24 00:07:05.771367 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:07:05.830806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 24 00:07:05.841558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:07:05.849582 systemd-networkd[1838]: lo: Link UP Nov 24 00:07:05.849599 systemd-networkd[1838]: lo: Gained carrier Nov 24 00:07:05.851427 systemd-networkd[1838]: Enumeration completed Nov 24 00:07:05.851804 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:07:05.852104 systemd-networkd[1838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:07:05.852117 systemd-networkd[1838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:07:05.857011 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:07:05.858600 systemd-networkd[1838]: eth0: Link UP Nov 24 00:07:05.859091 systemd-networkd[1838]: eth0: Gained carrier Nov 24 00:07:05.859214 systemd-networkd[1838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:07:05.861519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:07:05.869927 systemd-resolved[1840]: Positive Trust Anchors: Nov 24 00:07:05.870049 systemd-networkd[1838]: eth0: DHCPv4 address 172.31.16.87/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 24 00:07:05.871434 systemd-resolved[1840]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:07:05.871558 systemd-resolved[1840]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:07:05.881495 systemd-resolved[1840]: Defaulting to hostname 'linux'. Nov 24 00:07:05.886522 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:07:05.887445 systemd[1]: Reached target network.target - Network. Nov 24 00:07:05.888532 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:07:05.889240 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:07:05.889914 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 00:07:05.890507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:07:05.890918 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:07:05.891527 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:07:05.892116 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:07:05.892488 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:07:05.892918 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:07:05.892955 systemd[1]: Reached target paths.target - Path Units. 
Nov 24 00:07:05.893485 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:07:05.896192 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:07:05.900050 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:07:05.903627 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:07:05.904859 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:07:05.905545 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:07:05.909717 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:07:05.910828 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:07:05.913514 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:07:05.914677 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:07:05.916825 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:07:05.917899 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:07:05.918539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:07:05.918588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:07:05.920054 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:07:05.924243 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 00:07:05.928318 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:07:05.936157 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:07:05.940758 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:07:05.945407 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:07:05.946124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:07:05.956051 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:07:05.965360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 00:07:05.971387 systemd[1]: Started ntpd.service - Network Time Service. Nov 24 00:07:05.987630 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:07:05.990068 jq[1941]: false Nov 24 00:07:06.001456 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 24 00:07:06.010335 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Refreshing passwd entry cache Nov 24 00:07:06.010358 oslogin_cache_refresh[1943]: Refreshing passwd entry cache Nov 24 00:07:06.011295 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:07:06.016459 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:07:06.031512 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:07:06.035608 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:07:06.037436 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Nov 24 00:07:06.046944 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Failure getting users, quitting Nov 24 00:07:06.046944 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:07:06.046944 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Refreshing group entry cache Nov 24 00:07:06.045329 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:07:06.043881 oslogin_cache_refresh[1943]: Failure getting users, quitting Nov 24 00:07:06.043909 oslogin_cache_refresh[1943]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:07:06.043975 oslogin_cache_refresh[1943]: Refreshing group entry cache Nov 24 00:07:06.054504 oslogin_cache_refresh[1943]: Failure getting groups, quitting Nov 24 00:07:06.050275 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:07:06.057891 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Failure getting groups, quitting Nov 24 00:07:06.057891 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:07:06.054524 oslogin_cache_refresh[1943]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:07:06.056847 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:07:06.058572 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:07:06.059112 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:07:06.074850 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:07:06.076346 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:07:06.091534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:07:06.093083 jq[1966]: true Nov 24 00:07:06.102117 extend-filesystems[1942]: Found /dev/nvme0n1p6 Nov 24 00:07:06.101013 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 24 00:07:06.103889 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:07:06.104194 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 24 00:07:06.113069 extend-filesystems[1942]: Found /dev/nvme0n1p9 Nov 24 00:07:06.138727 extend-filesystems[1942]: Checking size of /dev/nvme0n1p9 Nov 24 00:07:06.148748 tar[1976]: linux-amd64/LICENSE Nov 24 00:07:06.148748 tar[1976]: linux-amd64/helm Nov 24 00:07:06.167359 (ntainerd)[1972]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:07:06.231843 coreos-metadata[1938]: Nov 24 00:07:06.230 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 24 00:07:06.232274 update_engine[1961]: I20251124 00:07:06.231323 1961 main.cc:92] Flatcar Update Engine starting Nov 24 00:07:06.238972 jq[1978]: true Nov 24 00:07:06.242152 extend-filesystems[1942]: Resized partition /dev/nvme0n1p9 Nov 24 00:07:06.242896 coreos-metadata[1938]: Nov 24 00:07:06.242 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 24 00:07:06.252873 coreos-metadata[1938]: Nov 24 00:07:06.252 INFO Fetch successful Nov 24 00:07:06.252989 coreos-metadata[1938]: Nov 24 00:07:06.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 24 00:07:06.259401 coreos-metadata[1938]: Nov 24 00:07:06.259 INFO Fetch successful Nov 24 00:07:06.259521 coreos-metadata[1938]: Nov 24 00:07:06.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 24 00:07:06.263062 extend-filesystems[1997]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 00:07:06.270425 coreos-metadata[1938]: Nov 24 00:07:06.267 INFO Fetch successful Nov 24 00:07:06.270425 coreos-metadata[1938]: Nov 24 00:07:06.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 24 00:07:06.270624 coreos-metadata[1938]: Nov 24 00:07:06.270 INFO Fetch successful Nov 24 00:07:06.270673 coreos-metadata[1938]: Nov 24 00:07:06.270 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 24 00:07:06.277575 coreos-metadata[1938]: Nov 24 00:07:06.277 INFO Fetch failed with 404: resource not found Nov 24 00:07:06.277725 coreos-metadata[1938]: Nov 24 00:07:06.277 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 24 00:07:06.281069 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 24 00:07:06.282103 coreos-metadata[1938]: Nov 24 00:07:06.281 INFO Fetch successful Nov 24 00:07:06.282103 coreos-metadata[1938]: Nov 24 00:07:06.282 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 24 00:07:06.283203 coreos-metadata[1938]: Nov 24 00:07:06.282 INFO Fetch successful Nov 24 00:07:06.283300 coreos-metadata[1938]: Nov 24 00:07:06.283 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 24 00:07:06.284944 coreos-metadata[1938]: Nov 24 00:07:06.284 INFO Fetch successful Nov 24 00:07:06.284944 coreos-metadata[1938]: Nov 24 00:07:06.284 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 24 00:07:06.285731 dbus-daemon[1939]: [system] SELinux support is enabled Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: ---------------------------------------------------- Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 
ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: corporation. Support and training for ntp-4 are Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: available at https://www.nwtime.org/support Nov 24 00:07:06.287808 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: ---------------------------------------------------- Nov 24 00:07:06.285964 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 00:07:06.286059 ntpd[1945]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:07:06.286122 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:07:06.286134 ntpd[1945]: ---------------------------------------------------- Nov 24 00:07:06.286144 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:07:06.286154 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:07:06.286165 ntpd[1945]: corporation. Support and training for ntp-4 are Nov 24 00:07:06.286174 ntpd[1945]: available at https://www.nwtime.org/support Nov 24 00:07:06.286185 ntpd[1945]: ---------------------------------------------------- Nov 24 00:07:06.293093 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:07:06.296923 coreos-metadata[1938]: Nov 24 00:07:06.291 INFO Fetch successful Nov 24 00:07:06.296923 coreos-metadata[1938]: Nov 24 00:07:06.291 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 24 00:07:06.293153 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:07:06.293884 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:07:06.295112 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:07:06.297969 ntpd[1945]: proto: precision = 0.076 usec (-24) Nov 24 00:07:06.298239 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: proto: precision = 0.076 usec (-24) Nov 24 00:07:06.298170 systemd[1]: Finished setup-oem.service - Setup OEM. 
Nov 24 00:07:06.303064 coreos-metadata[1938]: Nov 24 00:07:06.301 INFO Fetch successful Nov 24 00:07:06.306314 ntpd[1945]: basedate set to 2025-11-11 Nov 24 00:07:06.306923 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: basedate set to 2025-11-11 Nov 24 00:07:06.306923 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: gps base set to 2025-11-16 (week 2393) Nov 24 00:07:06.306923 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:07:06.306923 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:07:06.306345 ntpd[1945]: gps base set to 2025-11-16 (week 2393) Nov 24 00:07:06.306530 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:07:06.306564 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:07:06.307243 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:07:06.307243 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Listen normally on 3 eth0 172.31.16.87:123 Nov 24 00:07:06.307179 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:07:06.307351 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: Listen normally on 4 lo [::1]:123 Nov 24 00:07:06.307351 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: bind(21) AF_INET6 [fe80::4e5:45ff:fec0:85d3%2]:123 flags 0x811 failed: Cannot assign requested address Nov 24 00:07:06.307351 ntpd[1945]: 24 Nov 00:07:06 ntpd[1945]: unable to create socket on eth0 (5) for [fe80::4e5:45ff:fec0:85d3%2]:123 Nov 24 00:07:06.307214 ntpd[1945]: Listen normally on 3 eth0 172.31.16.87:123 Nov 24 00:07:06.307248 ntpd[1945]: Listen normally on 4 lo [::1]:123 Nov 24 00:07:06.307284 ntpd[1945]: bind(21) AF_INET6 [fe80::4e5:45ff:fec0:85d3%2]:123 flags 0x811 failed: Cannot assign requested address Nov 24 00:07:06.307307 ntpd[1945]: unable to create socket on eth0 (5) for [fe80::4e5:45ff:fec0:85d3%2]:123 Nov 24 00:07:06.308359 kernel: ntpd[1945]: segfault at 24 ip 000055cc572b9aeb sp 00007ffcd42c46d0 error 4 in ntpd[68aeb,55cc57257000+80000] likely on CPU 1 (core 0, socket 0) Nov 24 00:07:06.312136 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Nov 24 00:07:06.338652 dbus-daemon[1939]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1838 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 24 00:07:06.341153 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:07:06.346920 update_engine[1961]: I20251124 00:07:06.342799 1961 update_check_scheduler.cc:74] Next update check in 2m39s Nov 24 00:07:06.350362 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 24 00:07:06.379135 systemd-coredump[2021]: Process 1945 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 24 00:07:06.386203 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:07:06.393293 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 24 00:07:06.402699 systemd[1]: Started systemd-coredump@0-2021-0.service - Process Core Dump (PID 2021/UID 0). Nov 24 00:07:06.451410 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:07:06.453649 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 24 00:07:06.511385 locksmithd[2016]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:07:06.590671 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 24 00:07:06.566749 systemd-logind[1954]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 00:07:06.566775 systemd-logind[1954]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 24 00:07:06.566800 systemd-logind[1954]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:07:06.570359 systemd-logind[1954]: New seat seat0. Nov 24 00:07:06.573838 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:07:06.595220 extend-filesystems[1997]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 24 00:07:06.595220 extend-filesystems[1997]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 24 00:07:06.595220 extend-filesystems[1997]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 24 00:07:06.634812 extend-filesystems[1942]: Resized filesystem in /dev/nvme0n1p9 Nov 24 00:07:06.660676 bash[2056]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:07:06.598805 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:07:06.599253 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 00:07:06.622338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:07:06.662420 systemd[1]: Starting sshkeys.service... Nov 24 00:07:06.778624 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 24 00:07:06.782831 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:07:06.789679 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 24 00:07:07.012527 coreos-metadata[2132]: Nov 24 00:07:07.012 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 24 00:07:07.017059 coreos-metadata[2132]: Nov 24 00:07:07.015 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 24 00:07:07.026227 coreos-metadata[2132]: Nov 24 00:07:07.025 INFO Fetch successful Nov 24 00:07:07.026227 coreos-metadata[2132]: Nov 24 00:07:07.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 24 00:07:07.027380 coreos-metadata[2132]: Nov 24 00:07:07.027 INFO Fetch successful Nov 24 00:07:07.034659 unknown[2132]: wrote ssh authorized keys file for user: core Nov 24 00:07:07.059994 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 24 00:07:07.066886 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 24 00:07:07.069470 dbus-daemon[1939]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2015 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 24 00:07:07.078321 systemd[1]: Starting polkit.service - Authorization Manager... Nov 24 00:07:07.117561 update-ssh-keys[2138]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:07:07.116159 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 00:07:07.121121 systemd[1]: Finished sshkeys.service. 
Nov 24 00:07:07.145859 sshd_keygen[1962]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:07:07.148302 systemd-coredump[2037]: Process 1945 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1945: #0 0x000055cc572b9aeb n/a (ntpd + 0x68aeb) #1 0x000055cc57262cdf n/a (ntpd + 0x11cdf) #2 0x000055cc57263575 n/a (ntpd + 0x12575) #3 0x000055cc5725ed8a n/a (ntpd + 0xdd8a) #4 0x000055cc572605d3 n/a (ntpd + 0xf5d3) #5 0x000055cc57268fd1 n/a (ntpd + 0x17fd1) #6 0x000055cc57259c2d n/a (ntpd + 0x8c2d) #7 0x00007f2cdbc2a16c n/a (libc.so.6 + 0x2716c) #8 0x00007f2cdbc2a229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055cc57259c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Nov 24 00:07:07.150887 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 24 00:07:07.151102 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 24 00:07:07.163795 systemd[1]: systemd-coredump@0-2021-0.service: Deactivated successfully. Nov 24 00:07:07.248630 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:07:07.258352 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:07:07.267185 systemd[1]: Started sshd@0-172.31.16.87:22-139.178.68.195:55340.service - OpenSSH per-connection server daemon (139.178.68.195:55340). Nov 24 00:07:07.271877 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 24 00:07:07.276322 systemd[1]: Started ntpd.service - Network Time Service. Nov 24 00:07:07.284704 containerd[1972]: time="2025-11-24T00:07:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:07:07.287713 containerd[1972]: time="2025-11-24T00:07:07.287642876Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:07:07.335087 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:07:07.336468 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:07:07.343520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.371694156Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="30.458µs" Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.371745900Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.371773094Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.371976594Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.371999813Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372080815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372157979Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372172755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372467367Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372491193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372516994Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:07:07.375442 containerd[1972]: time="2025-11-24T00:07:07.372530398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:07:07.376484 containerd[1972]: time="2025-11-24T00:07:07.372635893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:07:07.376484 containerd[1972]: time="2025-11-24T00:07:07.372880436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:07:07.376484 containerd[1972]: time="2025-11-24T00:07:07.372915879Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:07:07.376484 containerd[1972]: time="2025-11-24T00:07:07.372932496Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:07:07.376484 containerd[1972]: time="2025-11-24T00:07:07.372973571Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:07:07.376484 containerd[1972]: 
time="2025-11-24T00:07:07.373480291Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:07:07.376484 containerd[1972]: time="2025-11-24T00:07:07.373599241Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379525262Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379636002Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379716971Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379743439Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379765382Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379781318Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379813283Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379835845Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379857001Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379872959Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379887667Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.379916774Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.380114360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:07:07.380178 containerd[1972]: time="2025-11-24T00:07:07.380145603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380167077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380193253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380210129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380223380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: 
time="2025-11-24T00:07:07.380237555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380249552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380265363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380282293Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:07:07.381794 containerd[1972]: time="2025-11-24T00:07:07.380295748Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:07:07.381336 systemd-networkd[1838]: eth0: Gained IPv6LL Nov 24 00:07:07.384503 containerd[1972]: time="2025-11-24T00:07:07.382867241Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:07:07.384503 containerd[1972]: time="2025-11-24T00:07:07.382910202Z" level=info msg="Start snapshots syncer" Nov 24 00:07:07.384503 containerd[1972]: time="2025-11-24T00:07:07.383791173Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:07:07.388916 containerd[1972]: time="2025-11-24T00:07:07.385836592Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:07:07.393068 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 24 00:07:07.395589 containerd[1972]: time="2025-11-24T00:07:07.394303728Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:07:07.395680 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:07:07.402179 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 24 00:07:07.407391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:07:07.415464 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:07:07.418473 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:07:07.431918 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:07:07.436729 ntpd[2162]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:07:07.436832 ntpd[2162]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: ---------------------------------------------------- Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: corporation. Support and training for ntp-4 are Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: available at https://www.nwtime.org/support Nov 24 00:07:07.437236 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: ---------------------------------------------------- Nov 24 00:07:07.436844 ntpd[2162]: ---------------------------------------------------- Nov 24 00:07:07.436854 ntpd[2162]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:07:07.436863 ntpd[2162]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:07:07.437971 containerd[1972]: time="2025-11-24T00:07:07.437828128Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:07:07.436872 ntpd[2162]: corporation. 
Support and training for ntp-4 are Nov 24 00:07:07.436881 ntpd[2162]: available at https://www.nwtime.org/support Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438164622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438350168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438386398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438420496Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438461166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438480056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438496972Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438560525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438579865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438595332Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438753953Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438795733Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:07:07.440228 containerd[1972]: time="2025-11-24T00:07:07.438824029Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:07:07.436890 ntpd[2162]: ---------------------------------------------------- Nov 24 00:07:07.441236 ntpd[2162]: proto: precision = 0.066 usec (-24) Nov 24 00:07:07.442218 containerd[1972]: time="2025-11-24T00:07:07.441898918Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:07:07.442218 containerd[1972]: time="2025-11-24T00:07:07.441959666Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:07:07.442218 containerd[1972]: time="2025-11-24T00:07:07.442005617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:07:07.442218 containerd[1972]: time="2025-11-24T00:07:07.442060408Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:07:07.442386 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: proto: precision = 0.066 usec (-24) Nov 24 00:07:07.442386 ntpd[2162]: 
24 Nov 00:07:07 ntpd[2162]: basedate set to 2025-11-11 Nov 24 00:07:07.442386 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: gps base set to 2025-11-16 (week 2393) Nov 24 00:07:07.442386 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:07:07.442386 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:07:07.441521 ntpd[2162]: basedate set to 2025-11-11 Nov 24 00:07:07.441536 ntpd[2162]: gps base set to 2025-11-16 (week 2393) Nov 24 00:07:07.441647 ntpd[2162]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:07:07.441678 ntpd[2162]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:07:07.443485 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:07:07.444074 polkitd[2139]: Started polkitd version 126 Nov 24 00:07:07.444416 containerd[1972]: time="2025-11-24T00:07:07.442086585Z" level=info msg="runtime interface created" Nov 24 00:07:07.444416 containerd[1972]: time="2025-11-24T00:07:07.444373182Z" level=info msg="created NRI interface" Nov 24 00:07:07.445409 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:07:07.449068 containerd[1972]: time="2025-11-24T00:07:07.446223618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:07:07.452186 containerd[1972]: time="2025-11-24T00:07:07.448140501Z" level=info msg="Connect containerd service" Nov 24 00:07:07.452319 containerd[1972]: time="2025-11-24T00:07:07.452296142Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:07:07.454441 ntpd[2162]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:07:07.456242 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:07:07.456242 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listen normally on 3 eth0 172.31.16.87:123 Nov 24 00:07:07.456242 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listen normally on 4 lo [::1]:123 Nov 24 00:07:07.456242 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listen normally on 5 eth0 [fe80::4e5:45ff:fec0:85d3%2]:123 Nov 24 00:07:07.456242 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: Listening on routing socket on fd #22 for interface updates Nov 24 00:07:07.454486 ntpd[2162]: Listen normally on 3 eth0 172.31.16.87:123 Nov 24 00:07:07.454525 ntpd[2162]: Listen normally on 4 lo [::1]:123 Nov 24 00:07:07.454555 ntpd[2162]: Listen normally on 5 eth0 [fe80::4e5:45ff:fec0:85d3%2]:123 Nov 24 00:07:07.454588 ntpd[2162]: Listening on routing socket on fd #22 for interface updates Nov 24 00:07:07.469743 ntpd[2162]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:07:07.469798 ntpd[2162]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:07:07.469992 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:07:07.469992 ntpd[2162]: 24 Nov 00:07:07 ntpd[2162]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:07:07.474748 polkitd[2139]: Loading rules from directory /etc/polkit-1/rules.d Nov 24 00:07:07.475491 polkitd[2139]: Loading rules from directory /run/polkit-1/rules.d Nov 24 00:07:07.475695 polkitd[2139]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:07:07.477504 polkitd[2139]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 24 00:07:07.477565 polkitd[2139]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such 
file or directory (g-file-error-quark, 4) Nov 24 00:07:07.477615 polkitd[2139]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 24 00:07:07.484052 polkitd[2139]: Finished loading, compiling and executing 2 rules Nov 24 00:07:07.484722 systemd[1]: Started polkit.service - Authorization Manager. Nov 24 00:07:07.488123 containerd[1972]: time="2025-11-24T00:07:07.486023548Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:07:07.489414 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 24 00:07:07.490522 polkitd[2139]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 24 00:07:07.517672 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:07:07.575737 systemd-hostnamed[2015]: Hostname set to (transient) Nov 24 00:07:07.576397 systemd-resolved[1840]: System hostname changed to 'ip-172-31-16-87'. Nov 24 00:07:07.627052 sshd[2160]: Accepted publickey for core from 139.178.68.195 port 55340 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:07.631355 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:07.655435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: Initializing new seelog logger Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: New Seelog Logger Creation Complete Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 processing appconfig overrides Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 processing appconfig overrides Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.660229 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 processing appconfig overrides Nov 24 00:07:07.660797 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.6587 INFO Proxy environment variables: Nov 24 00:07:07.661137 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:07:07.665707 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.665707 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:07.667108 amazon-ssm-agent[2172]: 2025/11/24 00:07:07 processing appconfig overrides Nov 24 00:07:07.711276 systemd-logind[1954]: New session 1 of user core. Nov 24 00:07:07.738252 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:07:07.746256 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 24 00:07:07.760777 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.6593 INFO https_proxy: Nov 24 00:07:07.766571 (systemd)[2212]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:07:07.778554 systemd-logind[1954]: New session c1 of user core. Nov 24 00:07:07.859878 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.6593 INFO http_proxy: Nov 24 00:07:07.880780 containerd[1972]: time="2025-11-24T00:07:07.880679137Z" level=info msg="Start subscribing containerd event" Nov 24 00:07:07.880780 containerd[1972]: time="2025-11-24T00:07:07.880775131Z" level=info msg="Start recovering state" Nov 24 00:07:07.880954 containerd[1972]: time="2025-11-24T00:07:07.880929102Z" level=info msg="Start event monitor" Nov 24 00:07:07.880954 containerd[1972]: time="2025-11-24T00:07:07.880948163Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:07:07.881058 containerd[1972]: time="2025-11-24T00:07:07.880958325Z" level=info msg="Start streaming server" Nov 24 00:07:07.881058 containerd[1972]: time="2025-11-24T00:07:07.880985476Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:07:07.881058 containerd[1972]: time="2025-11-24T00:07:07.880997621Z" level=info msg="runtime interface starting up..." Nov 24 00:07:07.881058 containerd[1972]: time="2025-11-24T00:07:07.881006483Z" level=info msg="starting plugins..." Nov 24 00:07:07.881191 containerd[1972]: time="2025-11-24T00:07:07.881022893Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:07:07.883935 containerd[1972]: time="2025-11-24T00:07:07.883805298Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:07:07.884857 containerd[1972]: time="2025-11-24T00:07:07.884093085Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:07:07.884317 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 00:07:07.887155 containerd[1972]: time="2025-11-24T00:07:07.885235800Z" level=info msg="containerd successfully booted in 0.606304s" Nov 24 00:07:07.959301 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.6593 INFO no_proxy: Nov 24 00:07:07.975247 tar[1976]: linux-amd64/README.md Nov 24 00:07:08.020792 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:07:08.057661 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.6595 INFO Checking if agent identity type OnPrem can be assumed Nov 24 00:07:08.117147 systemd[2212]: Queued start job for default target default.target. Nov 24 00:07:08.123512 systemd[2212]: Created slice app.slice - User Application Slice. Nov 24 00:07:08.123565 systemd[2212]: Reached target paths.target - Paths. Nov 24 00:07:08.123623 systemd[2212]: Reached target timers.target - Timers. Nov 24 00:07:08.125980 systemd[2212]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:07:08.153115 systemd[2212]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:07:08.153277 systemd[2212]: Reached target sockets.target - Sockets. Nov 24 00:07:08.153424 systemd[2212]: Reached target basic.target - Basic System. Nov 24 00:07:08.153478 systemd[2212]: Reached target default.target - Main User Target. Nov 24 00:07:08.153527 systemd[2212]: Startup finished in 349ms. Nov 24 00:07:08.153616 systemd[1]: Started user@500.service - User Manager for UID 500. 
Nov 24 00:07:08.156949 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.6597 INFO Checking if agent identity type EC2 can be assumed Nov 24 00:07:08.161987 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:07:08.256220 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8169 INFO Agent will take identity from EC2 Nov 24 00:07:08.321767 systemd[1]: Started sshd@1-172.31.16.87:22-139.178.68.195:55346.service - OpenSSH per-connection server daemon (139.178.68.195:55346). Nov 24 00:07:08.357080 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8262 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 24 00:07:08.455937 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8263 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 24 00:07:08.473441 amazon-ssm-agent[2172]: 2025/11/24 00:07:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:08.473441 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:07:08.473940 amazon-ssm-agent[2172]: 2025/11/24 00:07:08 processing appconfig overrides Nov 24 00:07:08.506852 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8263 INFO [amazon-ssm-agent] Starting Core Agent Nov 24 00:07:08.506852 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8263 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8263 INFO [Registrar] Starting registrar module Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8305 INFO [EC2Identity] Checking disk for registration info Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8306 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:07.8306 INFO [EC2Identity] Generating registration keypair Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.4161 INFO [EC2Identity] Checking write access before registering Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.4166 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.4723 INFO [EC2Identity] EC2 registration was successful. Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.4724 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.4726 INFO [CredentialRefresher] credentialRefresher has started Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.4726 INFO [CredentialRefresher] Starting credentials refresher loop Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.5062 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 24 00:07:08.507066 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.5067 INFO [CredentialRefresher] Credentials ready Nov 24 00:07:08.535887 sshd[2231]: Accepted publickey for core from 139.178.68.195 port 55346 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:08.538628 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:08.546890 systemd-logind[1954]: New session 2 of user core. Nov 24 00:07:08.556074 amazon-ssm-agent[2172]: 2025-11-24 00:07:08.5071 INFO [CredentialRefresher] Next credential rotation will be in 29.999985556816668 minutes Nov 24 00:07:08.564415 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 24 00:07:08.716593 sshd[2235]: Connection closed by 139.178.68.195 port 55346 Nov 24 00:07:08.716800 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:08.756621 systemd[1]: sshd@1-172.31.16.87:22-139.178.68.195:55346.service: Deactivated successfully. Nov 24 00:07:08.761558 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 00:07:08.764131 systemd-logind[1954]: Session 2 logged out. Waiting for processes to exit. Nov 24 00:07:08.771184 systemd[1]: Started sshd@2-172.31.16.87:22-139.178.68.195:55352.service - OpenSSH per-connection server daemon (139.178.68.195:55352). Nov 24 00:07:08.781723 systemd-logind[1954]: Removed session 2. Nov 24 00:07:09.009413 sshd[2241]: Accepted publickey for core from 139.178.68.195 port 55352 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:09.014368 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:09.022255 systemd-logind[1954]: New session 3 of user core. Nov 24 00:07:09.033958 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:07:09.164685 sshd[2244]: Connection closed by 139.178.68.195 port 55352 Nov 24 00:07:09.165509 sshd-session[2241]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:09.172665 systemd[1]: sshd@2-172.31.16.87:22-139.178.68.195:55352.service: Deactivated successfully. Nov 24 00:07:09.175667 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:07:09.177934 systemd-logind[1954]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:07:09.180183 systemd-logind[1954]: Removed session 3. Nov 24 00:07:09.525677 amazon-ssm-agent[2172]: 2025-11-24 00:07:09.5255 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 24 00:07:09.626451 amazon-ssm-agent[2172]: 2025-11-24 00:07:09.5333 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2251) started Nov 24 00:07:09.650953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:09.659977 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:07:09.666632 systemd[1]: Startup finished in 2.765s (kernel) + 8.032s (initrd) + 10.051s (userspace) = 20.849s. Nov 24 00:07:09.680594 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:07:09.730421 amazon-ssm-agent[2172]: 2025-11-24 00:07:09.5333 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 24 00:07:10.657791 kubelet[2262]: E1124 00:07:10.657628 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:07:10.661503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:07:10.661709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:07:10.662189 systemd[1]: kubelet.service: Consumed 1.156s CPU time, 268.1M memory peak. Nov 24 00:07:15.238099 systemd-resolved[1840]: Clock change detected. Flushing caches. 
Nov 24 00:07:20.004679 systemd[1]: Started sshd@3-172.31.16.87:22-139.178.68.195:33782.service - OpenSSH per-connection server daemon (139.178.68.195:33782). Nov 24 00:07:20.190748 sshd[2279]: Accepted publickey for core from 139.178.68.195 port 33782 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:20.192444 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:20.198642 systemd-logind[1954]: New session 4 of user core. Nov 24 00:07:20.206931 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 00:07:20.327528 sshd[2282]: Connection closed by 139.178.68.195 port 33782 Nov 24 00:07:20.327786 sshd-session[2279]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:20.332947 systemd-logind[1954]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:07:20.333707 systemd[1]: sshd@3-172.31.16.87:22-139.178.68.195:33782.service: Deactivated successfully. Nov 24 00:07:20.336670 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:07:20.338552 systemd-logind[1954]: Removed session 4. Nov 24 00:07:20.363092 systemd[1]: Started sshd@4-172.31.16.87:22-139.178.68.195:48838.service - OpenSSH per-connection server daemon (139.178.68.195:48838). Nov 24 00:07:20.561025 sshd[2288]: Accepted publickey for core from 139.178.68.195 port 48838 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:20.562529 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:20.569384 systemd-logind[1954]: New session 5 of user core. Nov 24 00:07:20.575837 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:07:20.695686 sshd[2291]: Connection closed by 139.178.68.195 port 48838 Nov 24 00:07:20.696805 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:20.701127 systemd[1]: sshd@4-172.31.16.87:22-139.178.68.195:48838.service: Deactivated successfully. Nov 24 00:07:20.702939 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:07:20.703764 systemd-logind[1954]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:07:20.705481 systemd-logind[1954]: Removed session 5. Nov 24 00:07:20.727802 systemd[1]: Started sshd@5-172.31.16.87:22-139.178.68.195:48852.service - OpenSSH per-connection server daemon (139.178.68.195:48852). Nov 24 00:07:20.908623 sshd[2297]: Accepted publickey for core from 139.178.68.195 port 48852 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:20.910221 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:20.916809 systemd-logind[1954]: New session 6 of user core. Nov 24 00:07:20.923843 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:07:21.046529 sshd[2300]: Connection closed by 139.178.68.195 port 48852 Nov 24 00:07:21.047672 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:21.053136 systemd[1]: sshd@5-172.31.16.87:22-139.178.68.195:48852.service: Deactivated successfully. Nov 24 00:07:21.055524 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:07:21.056962 systemd-logind[1954]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:07:21.058937 systemd-logind[1954]: Removed session 6. Nov 24 00:07:21.082315 systemd[1]: Started sshd@6-172.31.16.87:22-139.178.68.195:48864.service - OpenSSH per-connection server daemon (139.178.68.195:48864). 
Nov 24 00:07:21.269064 sshd[2306]: Accepted publickey for core from 139.178.68.195 port 48864 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:21.270640 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:21.280637 systemd-logind[1954]: New session 7 of user core. Nov 24 00:07:21.289862 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:07:21.422889 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:07:21.423179 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:07:21.438217 sudo[2310]: pam_unix(sudo:session): session closed for user root Nov 24 00:07:21.462010 sshd[2309]: Connection closed by 139.178.68.195 port 48864 Nov 24 00:07:21.463121 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:21.469102 systemd[1]: sshd@6-172.31.16.87:22-139.178.68.195:48864.service: Deactivated successfully. Nov 24 00:07:21.471350 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:07:21.473015 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:07:21.474250 systemd-logind[1954]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:07:21.476973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:07:21.478616 systemd-logind[1954]: Removed session 7. Nov 24 00:07:21.495663 systemd[1]: Started sshd@7-172.31.16.87:22-139.178.68.195:48866.service - OpenSSH per-connection server daemon (139.178.68.195:48866). Nov 24 00:07:21.667606 sshd[2319]: Accepted publickey for core from 139.178.68.195 port 48866 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:21.670025 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:21.677734 systemd-logind[1954]: New session 8 of user core. Nov 24 00:07:21.682847 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:07:21.729569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:21.740264 (kubelet)[2328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:07:21.790660 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:07:21.791123 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:07:21.800801 sudo[2335]: pam_unix(sudo:session): session closed for user root Nov 24 00:07:21.807597 kubelet[2328]: E1124 00:07:21.806823 2328 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:07:21.809178 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:07:21.810048 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:07:21.813772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:07:21.813978 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 24 00:07:21.815526 systemd[1]: kubelet.service: Consumed 204ms CPU time, 109.3M memory peak. Nov 24 00:07:21.824943 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:07:21.866744 augenrules[2358]: No rules Nov 24 00:07:21.868614 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:07:21.868908 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:07:21.870189 sudo[2334]: pam_unix(sudo:session): session closed for user root Nov 24 00:07:21.894941 sshd[2323]: Connection closed by 139.178.68.195 port 48866 Nov 24 00:07:21.895509 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Nov 24 00:07:21.900047 systemd[1]: sshd@7-172.31.16.87:22-139.178.68.195:48866.service: Deactivated successfully. Nov 24 00:07:21.902413 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:07:21.903812 systemd-logind[1954]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:07:21.905764 systemd-logind[1954]: Removed session 8. Nov 24 00:07:21.938909 systemd[1]: Started sshd@8-172.31.16.87:22-139.178.68.195:48882.service - OpenSSH per-connection server daemon (139.178.68.195:48882). Nov 24 00:07:22.124410 sshd[2367]: Accepted publickey for core from 139.178.68.195 port 48882 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:07:22.126056 sshd-session[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:07:22.132635 systemd-logind[1954]: New session 9 of user core. Nov 24 00:07:22.141875 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:07:22.239633 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:07:22.239917 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:07:22.959284 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:07:22.981177 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:07:23.480495 dockerd[2389]: time="2025-11-24T00:07:23.480413965Z" level=info msg="Starting up" Nov 24 00:07:23.481491 dockerd[2389]: time="2025-11-24T00:07:23.481417186Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:07:23.494619 dockerd[2389]: time="2025-11-24T00:07:23.494520285Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:07:23.569624 dockerd[2389]: time="2025-11-24T00:07:23.569373221Z" level=info msg="Loading containers: start." Nov 24 00:07:23.582769 kernel: Initializing XFRM netlink socket Nov 24 00:07:23.860748 (udev-worker)[2410]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:07:23.912736 systemd-networkd[1838]: docker0: Link UP Nov 24 00:07:23.927657 dockerd[2389]: time="2025-11-24T00:07:23.926966838Z" level=info msg="Loading containers: done." Nov 24 00:07:23.947288 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2191261574-merged.mount: Deactivated successfully. 
Nov 24 00:07:23.956687 dockerd[2389]: time="2025-11-24T00:07:23.956615932Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:07:23.956939 dockerd[2389]: time="2025-11-24T00:07:23.956762485Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:07:23.956939 dockerd[2389]: time="2025-11-24T00:07:23.956917418Z" level=info msg="Initializing buildkit" Nov 24 00:07:24.007359 dockerd[2389]: time="2025-11-24T00:07:24.007308092Z" level=info msg="Completed buildkit initialization" Nov 24 00:07:24.025470 dockerd[2389]: time="2025-11-24T00:07:24.025408546Z" level=info msg="Daemon has completed initialization" Nov 24 00:07:24.025821 dockerd[2389]: time="2025-11-24T00:07:24.025779006Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:07:24.025872 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:07:25.323031 containerd[1972]: time="2025-11-24T00:07:25.322962849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 00:07:25.975078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914453573.mount: Deactivated successfully. Nov 24 00:07:27.729179 containerd[1972]: time="2025-11-24T00:07:27.729104736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:27.730722 containerd[1972]: time="2025-11-24T00:07:27.730477018Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113213" Nov 24 00:07:27.731766 containerd[1972]: time="2025-11-24T00:07:27.731722153Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:27.735063 containerd[1972]: time="2025-11-24T00:07:27.734994822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:27.737594 containerd[1972]: time="2025-11-24T00:07:27.737520441Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 2.414505659s" Nov 24 00:07:27.738617 containerd[1972]: time="2025-11-24T00:07:27.737798785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 00:07:27.740059 containerd[1972]: time="2025-11-24T00:07:27.740019230Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 00:07:29.585342 containerd[1972]: time="2025-11-24T00:07:29.585275641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:29.587639 containerd[1972]: time="2025-11-24T00:07:29.587588299Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018107" Nov 24 00:07:29.591120 containerd[1972]: time="2025-11-24T00:07:29.590588936Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:29.595121 containerd[1972]: time="2025-11-24T00:07:29.595079112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:29.595854 containerd[1972]: time="2025-11-24T00:07:29.595815577Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 1.855739478s" Nov 24 00:07:29.595854 containerd[1972]: time="2025-11-24T00:07:29.595854868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 00:07:29.596687 containerd[1972]: time="2025-11-24T00:07:29.596532991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 00:07:31.125180 containerd[1972]: time="2025-11-24T00:07:31.125123255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:31.127941 containerd[1972]: time="2025-11-24T00:07:31.127768553Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156482" Nov 24 00:07:31.130657 containerd[1972]: time="2025-11-24T00:07:31.130608476Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:31.135535 containerd[1972]: time="2025-11-24T00:07:31.135450365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:31.137180 containerd[1972]: time="2025-11-24T00:07:31.136990112Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.54042319s" Nov 24 00:07:31.137180 containerd[1972]: time="2025-11-24T00:07:31.137045693Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 00:07:31.138142 containerd[1972]: time="2025-11-24T00:07:31.138107117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 00:07:32.065188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 00:07:32.069852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
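As a quick sanity check on the pull timings reported above, the sizes and durations that containerd logs for the control-plane images can be turned into an effective throughput figure. The numbers below are copied from the "Pulled image ... size ... in ..." entries; the snippet is nothing more than arithmetic over those logged values.

    # Effective pull throughput implied by the containerd "Pulled image"
    # entries above (sizes in bytes and durations copied from the journal).
    pulls = {
        "kube-apiserver:v1.33.6": (30_109_812, 2.414505659),
        "kube-controller-manager:v1.33.6": (27_675_143, 1.855739478),
        "kube-scheduler:v1.33.6": (21_813_536, 1.54042319),
    }

    for image, (size_bytes, seconds) in pulls.items():
        mib_per_s = size_bytes / seconds / (1024 * 1024)
        print(f"{image}: {mib_per_s:5.1f} MiB/s")
    # All three land in roughly the 12-14 MiB/s range.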
Nov 24 00:07:32.257116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408248886.mount: Deactivated successfully. Nov 24 00:07:32.392517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:32.406700 (kubelet)[2682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:07:32.483807 kubelet[2682]: E1124 00:07:32.483621 2682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:07:32.486366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:07:32.486606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:07:32.487035 systemd[1]: kubelet.service: Consumed 239ms CPU time, 108.5M memory peak. Nov 24 00:07:33.027447 containerd[1972]: time="2025-11-24T00:07:33.027360115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:33.029575 containerd[1972]: time="2025-11-24T00:07:33.029496267Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929138" Nov 24 00:07:33.032754 containerd[1972]: time="2025-11-24T00:07:33.032681331Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:33.035722 containerd[1972]: time="2025-11-24T00:07:33.035645733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:33.036745 containerd[1972]: time="2025-11-24T00:07:33.036551352Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 1.898407578s" Nov 24 00:07:33.036745 containerd[1972]: time="2025-11-24T00:07:33.036608170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 00:07:33.037398 containerd[1972]: time="2025-11-24T00:07:33.037350045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 00:07:33.652925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047469489.mount: Deactivated successfully. 
Nov 24 00:07:34.837468 containerd[1972]: time="2025-11-24T00:07:34.837393506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:34.840714 containerd[1972]: time="2025-11-24T00:07:34.840404544Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 24 00:07:34.844442 containerd[1972]: time="2025-11-24T00:07:34.843105315Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:34.849202 containerd[1972]: time="2025-11-24T00:07:34.849136392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:34.850646 containerd[1972]: time="2025-11-24T00:07:34.850597508Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.813198485s" Nov 24 00:07:34.850838 containerd[1972]: time="2025-11-24T00:07:34.850815659Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 00:07:34.851687 containerd[1972]: time="2025-11-24T00:07:34.851655327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:07:35.339184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3496132966.mount: Deactivated successfully. 
Nov 24 00:07:35.353911 containerd[1972]: time="2025-11-24T00:07:35.353843193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:07:35.355964 containerd[1972]: time="2025-11-24T00:07:35.355906676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:07:35.358484 containerd[1972]: time="2025-11-24T00:07:35.358432688Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:07:35.362587 containerd[1972]: time="2025-11-24T00:07:35.361704794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:07:35.362587 containerd[1972]: time="2025-11-24T00:07:35.362448072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 510.753511ms" Nov 24 00:07:35.362587 containerd[1972]: time="2025-11-24T00:07:35.362489704Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:07:35.363576 containerd[1972]: time="2025-11-24T00:07:35.363519882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 00:07:35.941484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906967844.mount: Deactivated successfully. Nov 24 00:07:38.383712 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 24 00:07:38.583126 containerd[1972]: time="2025-11-24T00:07:38.583057813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:38.590820 containerd[1972]: time="2025-11-24T00:07:38.590750685Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Nov 24 00:07:38.593766 containerd[1972]: time="2025-11-24T00:07:38.593679123Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:38.610717 containerd[1972]: time="2025-11-24T00:07:38.609954269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:07:38.614978 containerd[1972]: time="2025-11-24T00:07:38.611136635Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.247577166s" Nov 24 00:07:38.614978 containerd[1972]: time="2025-11-24T00:07:38.611182668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:07:42.171206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:42.171609 systemd[1]: kubelet.service: Consumed 239ms CPU time, 108.5M memory peak. Nov 24 00:07:42.174647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:07:42.214190 systemd[1]: Reload requested from client PID 2831 ('systemctl') (unit session-9.scope)... Nov 24 00:07:42.214402 systemd[1]: Reloading... Nov 24 00:07:42.387637 zram_generator::config[2887]: No configuration found. Nov 24 00:07:42.653599 systemd[1]: Reloading finished in 438 ms. Nov 24 00:07:42.715281 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:07:42.715397 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:07:42.715910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:42.715982 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.2M memory peak. Nov 24 00:07:42.718178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:07:43.015899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:43.027146 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:07:43.083147 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:07:43.083884 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:07:43.083884 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:07:43.091247 kubelet[2938]: I1124 00:07:43.091042 2938 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:07:43.468091 kubelet[2938]: I1124 00:07:43.467885 2938 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:07:43.468091 kubelet[2938]: I1124 00:07:43.467919 2938 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:07:43.468739 kubelet[2938]: I1124 00:07:43.468703 2938 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:07:43.527722 kubelet[2938]: E1124 00:07:43.527484 2938 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:07:43.528303 kubelet[2938]: I1124 00:07:43.528278 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:07:43.555584 kubelet[2938]: I1124 00:07:43.555540 2938 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:07:43.570145 kubelet[2938]: I1124 00:07:43.570084 2938 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:07:43.579012 kubelet[2938]: I1124 00:07:43.578907 2938 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:07:43.584739 kubelet[2938]: I1124 00:07:43.579000 2938 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:07:43.584739 kubelet[2938]: I1124 00:07:43.584735 2938 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 24 00:07:43.584739 kubelet[2938]: I1124 00:07:43.584759 2938 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:07:43.586110 kubelet[2938]: I1124 00:07:43.586053 2938 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:07:43.589556 kubelet[2938]: I1124 00:07:43.589488 2938 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:07:43.589556 kubelet[2938]: I1124 00:07:43.589522 2938 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:07:43.589556 kubelet[2938]: I1124 00:07:43.589549 2938 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:07:43.589556 kubelet[2938]: I1124 00:07:43.589578 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:07:43.609426 kubelet[2938]: E1124 00:07:43.609368 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:07:43.609633 kubelet[2938]: I1124 00:07:43.609532 2938 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:07:43.610463 kubelet[2938]: I1124 00:07:43.610395 2938 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:07:43.610868 kubelet[2938]: E1124 00:07:43.610828 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-87&limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:07:43.610868 kubelet[2938]: W1124 00:07:43.611336 2938 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 24 00:07:43.617264 kubelet[2938]: I1124 00:07:43.617218 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:07:43.617405 kubelet[2938]: I1124 00:07:43.617324 2938 server.go:1289] "Started kubelet" Nov 24 00:07:43.619207 kubelet[2938]: I1124 00:07:43.619148 2938 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:07:43.622709 kubelet[2938]: I1124 00:07:43.622669 2938 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:07:43.628104 kubelet[2938]: I1124 00:07:43.627889 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:07:43.628509 kubelet[2938]: I1124 00:07:43.628478 2938 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:07:43.633462 kubelet[2938]: E1124 00:07:43.628744 2938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.87:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.87:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-87.187ac8b04dfdeb42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-87,UID:ip-172-31-16-87,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-87,},FirstTimestamp:2025-11-24 00:07:43.617256258 +0000 UTC m=+0.585464015,LastTimestamp:2025-11-24 00:07:43.617256258 +0000 UTC m=+0.585464015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-87,}" Nov 24 00:07:43.639132 kubelet[2938]: I1124 00:07:43.638980 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:07:43.639290 kubelet[2938]: I1124 00:07:43.639170 2938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:07:43.641594 kubelet[2938]: I1124 00:07:43.640948 2938 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:07:43.641594 kubelet[2938]: E1124 00:07:43.641316 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-87\" not found" Nov 24 00:07:43.642721 kubelet[2938]: I1124 00:07:43.642700 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:07:43.644210 kubelet[2938]: I1124 00:07:43.644185 2938 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:07:43.648088 kubelet[2938]: E1124 00:07:43.648045 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:07:43.648234 kubelet[2938]: E1124 00:07:43.648187 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-87?timeout=10s\": dial tcp 172.31.16.87:6443: connect: connection refused" interval="200ms" Nov 24 00:07:43.656300 kubelet[2938]: I1124 00:07:43.656263 2938 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:07:43.657591 kubelet[2938]: I1124 
00:07:43.656486 2938 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:07:43.657591 kubelet[2938]: I1124 00:07:43.656640 2938 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:07:43.659777 kubelet[2938]: E1124 00:07:43.659745 2938 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:07:43.681430 kubelet[2938]: I1124 00:07:43.681192 2938 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:07:43.684850 kubelet[2938]: I1124 00:07:43.684812 2938 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:07:43.685888 kubelet[2938]: I1124 00:07:43.685464 2938 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:07:43.685888 kubelet[2938]: I1124 00:07:43.685503 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:07:43.685888 kubelet[2938]: I1124 00:07:43.685516 2938 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:07:43.685888 kubelet[2938]: E1124 00:07:43.685585 2938 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:07:43.688735 kubelet[2938]: E1124 00:07:43.688694 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:07:43.702887 kubelet[2938]: I1124 00:07:43.702817 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:07:43.702887 kubelet[2938]: I1124 00:07:43.702876 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:07:43.702887 kubelet[2938]: I1124 00:07:43.702899 2938 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:07:43.707798 kubelet[2938]: I1124 00:07:43.707731 2938 policy_none.go:49] "None policy: Start" Nov 24 00:07:43.707798 kubelet[2938]: I1124 00:07:43.707770 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:07:43.707798 kubelet[2938]: I1124 00:07:43.707782 2938 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:07:43.720523 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:07:43.741622 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:07:43.742223 kubelet[2938]: E1124 00:07:43.742142 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-87\" not found" Nov 24 00:07:43.748197 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
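The container_manager_linux entry earlier in this journal dumps the kubelet's entire nodeConfig as one JSON blob, which makes the hard eviction thresholds easy to miss. The sketch below pretty-prints just that field; the signal names and values are copied from the logged blob, and the snippet is purely a reading aid, not anything the node itself runs.

    import json

    # HardEvictionThresholds as they appear inside the nodeConfig blob logged
    # by container_manager_linux.go above (other fields trimmed).
    node_config_json = """
    {"NodeName": "ip-172-31-16-87",
     "HardEvictionThresholds": [
       {"Signal": "memory.available",  "Operator": "LessThan",
        "Value": {"Quantity": "100Mi", "Percentage": 0}},
       {"Signal": "nodefs.available",  "Operator": "LessThan",
        "Value": {"Quantity": null, "Percentage": 0.1}},
       {"Signal": "nodefs.inodesFree", "Operator": "LessThan",
        "Value": {"Quantity": null, "Percentage": 0.05}},
       {"Signal": "imagefs.available", "Operator": "LessThan",
        "Value": {"Quantity": null, "Percentage": 0.15}},
       {"Signal": "imagefs.inodesFree","Operator": "LessThan",
        "Value": {"Quantity": null, "Percentage": 0.05}}]}
    """

    config = json.loads(node_config_json)
    for t in config["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f"{t['Value']['Percentage']:.0%}"
        print(f"{t['Signal']:<19} {t['Operator']} {value}")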
Nov 24 00:07:43.755953 kubelet[2938]: E1124 00:07:43.755906 2938 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:07:43.756442 kubelet[2938]: I1124 00:07:43.756420 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:07:43.756527 kubelet[2938]: I1124 00:07:43.756442 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:07:43.759593 kubelet[2938]: I1124 00:07:43.759520 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:07:43.762441 kubelet[2938]: E1124 00:07:43.762336 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:07:43.762441 kubelet[2938]: E1124 00:07:43.762396 2938 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-87\" not found" Nov 24 00:07:43.803133 systemd[1]: Created slice kubepods-burstable-pod924af24797d9d85eee00432aa4a8f9ab.slice - libcontainer container kubepods-burstable-pod924af24797d9d85eee00432aa4a8f9ab.slice. Nov 24 00:07:43.817913 kubelet[2938]: E1124 00:07:43.817863 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:43.823055 systemd[1]: Created slice kubepods-burstable-podd1368c83d8d7188e4557cd2f81016c96.slice - libcontainer container kubepods-burstable-podd1368c83d8d7188e4557cd2f81016c96.slice. Nov 24 00:07:43.835168 kubelet[2938]: E1124 00:07:43.835134 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:43.838532 systemd[1]: Created slice kubepods-burstable-pode63252b0ab9750cc79304959fc044f32.slice - libcontainer container kubepods-burstable-pode63252b0ab9750cc79304959fc044f32.slice. 
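Every "connection refused" error above is the kubelet's client-go informers and lease controller retrying against https://172.31.16.87:6443 before the static kube-apiserver pod (started further down in the journal) begins serving; the errors stop on their own once that happens. A minimal reachability probe for the same endpoint might look like the sketch below; the address is taken from the log, and the loop is only illustrative, not how the kubelet itself waits.

    import socket
    import time

    # API server endpoint the kubelet is retrying in the entries above.
    HOST, PORT = "172.31.16.87", 6443

    def wait_for_apiserver(timeout: float = 2.0, interval: float = 2.0) -> None:
        """Poll until a TCP connect to HOST:PORT succeeds."""
        while True:
            try:
                with socket.create_connection((HOST, PORT), timeout=timeout):
                    print(f"{HOST}:{PORT} is accepting connections")
                    return
            except OSError as exc:
                # Same symptom as the journal: connect: connection refused.
                print(f"{HOST}:{PORT} not ready: {exc}")
                time.sleep(interval)

    if __name__ == "__main__":
        wait_for_apiserver()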
Nov 24 00:07:43.842763 kubelet[2938]: E1124 00:07:43.842731 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:43.844982 kubelet[2938]: I1124 00:07:43.844882 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:43.844982 kubelet[2938]: I1124 00:07:43.844922 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:43.844982 kubelet[2938]: I1124 00:07:43.844944 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/924af24797d9d85eee00432aa4a8f9ab-ca-certs\") pod \"kube-apiserver-ip-172-31-16-87\" (UID: \"924af24797d9d85eee00432aa4a8f9ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:43.844982 kubelet[2938]: I1124 00:07:43.844964 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:43.845199 kubelet[2938]: I1124 00:07:43.844995 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:43.845199 kubelet[2938]: I1124 00:07:43.845022 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:43.845199 kubelet[2938]: I1124 00:07:43.845049 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e63252b0ab9750cc79304959fc044f32-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-87\" (UID: \"e63252b0ab9750cc79304959fc044f32\") " pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:43.845199 kubelet[2938]: I1124 00:07:43.845070 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/924af24797d9d85eee00432aa4a8f9ab-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-87\" (UID: \"924af24797d9d85eee00432aa4a8f9ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:43.845199 kubelet[2938]: I1124 
00:07:43.845089 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/924af24797d9d85eee00432aa4a8f9ab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-87\" (UID: \"924af24797d9d85eee00432aa4a8f9ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:43.848836 kubelet[2938]: E1124 00:07:43.848792 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-87?timeout=10s\": dial tcp 172.31.16.87:6443: connect: connection refused" interval="400ms" Nov 24 00:07:43.861059 kubelet[2938]: I1124 00:07:43.861012 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-87" Nov 24 00:07:43.861553 kubelet[2938]: E1124 00:07:43.861516 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.87:6443/api/v1/nodes\": dial tcp 172.31.16.87:6443: connect: connection refused" node="ip-172-31-16-87" Nov 24 00:07:44.064993 kubelet[2938]: I1124 00:07:44.064934 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-87" Nov 24 00:07:44.065834 kubelet[2938]: E1124 00:07:44.065791 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.87:6443/api/v1/nodes\": dial tcp 172.31.16.87:6443: connect: connection refused" node="ip-172-31-16-87" Nov 24 00:07:44.120366 containerd[1972]: time="2025-11-24T00:07:44.120307774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-87,Uid:924af24797d9d85eee00432aa4a8f9ab,Namespace:kube-system,Attempt:0,}" Nov 24 00:07:44.146443 containerd[1972]: time="2025-11-24T00:07:44.146375278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-87,Uid:d1368c83d8d7188e4557cd2f81016c96,Namespace:kube-system,Attempt:0,}" Nov 24 00:07:44.149161 containerd[1972]: time="2025-11-24T00:07:44.147070952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-87,Uid:e63252b0ab9750cc79304959fc044f32,Namespace:kube-system,Attempt:0,}" Nov 24 00:07:44.250782 kubelet[2938]: E1124 00:07:44.250366 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-87?timeout=10s\": dial tcp 172.31.16.87:6443: connect: connection refused" interval="800ms" Nov 24 00:07:44.293819 containerd[1972]: time="2025-11-24T00:07:44.293768690Z" level=info msg="connecting to shim 8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69" address="unix:///run/containerd/s/51ef7e71328705b53e218e9b8b35520d189742055afab3632bc817a8033fd943" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:07:44.310206 containerd[1972]: time="2025-11-24T00:07:44.310100534Z" level=info msg="connecting to shim 178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4" address="unix:///run/containerd/s/588e2715898a6ef941babed710e3ac511110f270ded9ddf3dc2ddef1fea7123b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:07:44.311593 containerd[1972]: time="2025-11-24T00:07:44.311526568Z" level=info msg="connecting to shim 244d918cab43ba513a4baeaef2303151ad6461f597ff692477dfb97fd50de166" address="unix:///run/containerd/s/c4f8ff64aa35e052980cb120e87ceacb5f34d70ae9d59b47929857dd03698eba" namespace=k8s.io 
protocol=ttrpc version=3 Nov 24 00:07:44.439112 systemd[1]: Started cri-containerd-8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69.scope - libcontainer container 8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69. Nov 24 00:07:44.451980 systemd[1]: Started cri-containerd-178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4.scope - libcontainer container 178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4. Nov 24 00:07:44.454654 systemd[1]: Started cri-containerd-244d918cab43ba513a4baeaef2303151ad6461f597ff692477dfb97fd50de166.scope - libcontainer container 244d918cab43ba513a4baeaef2303151ad6461f597ff692477dfb97fd50de166. Nov 24 00:07:44.470980 kubelet[2938]: I1124 00:07:44.470904 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-87" Nov 24 00:07:44.471558 kubelet[2938]: E1124 00:07:44.471454 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.87:6443/api/v1/nodes\": dial tcp 172.31.16.87:6443: connect: connection refused" node="ip-172-31-16-87" Nov 24 00:07:44.558351 containerd[1972]: time="2025-11-24T00:07:44.558215757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-87,Uid:e63252b0ab9750cc79304959fc044f32,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69\"" Nov 24 00:07:44.568808 kubelet[2938]: E1124 00:07:44.568763 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:07:44.570947 containerd[1972]: time="2025-11-24T00:07:44.569765430Z" level=info msg="CreateContainer within sandbox \"8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:07:44.599865 containerd[1972]: time="2025-11-24T00:07:44.599815662Z" level=info msg="Container ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:07:44.605860 containerd[1972]: time="2025-11-24T00:07:44.605013772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-87,Uid:d1368c83d8d7188e4557cd2f81016c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4\"" Nov 24 00:07:44.606330 containerd[1972]: time="2025-11-24T00:07:44.606291515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-87,Uid:924af24797d9d85eee00432aa4a8f9ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"244d918cab43ba513a4baeaef2303151ad6461f597ff692477dfb97fd50de166\"" Nov 24 00:07:44.616395 containerd[1972]: time="2025-11-24T00:07:44.616333415Z" level=info msg="CreateContainer within sandbox \"178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:07:44.621484 containerd[1972]: time="2025-11-24T00:07:44.621423477Z" level=info msg="CreateContainer within sandbox \"8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc\"" Nov 24 00:07:44.622127 containerd[1972]: time="2025-11-24T00:07:44.621810244Z" level=info msg="CreateContainer within sandbox \"244d918cab43ba513a4baeaef2303151ad6461f597ff692477dfb97fd50de166\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:07:44.634345 containerd[1972]: time="2025-11-24T00:07:44.634307790Z" level=info msg="StartContainer for \"ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc\"" Nov 24 00:07:44.636982 containerd[1972]: time="2025-11-24T00:07:44.636926402Z" level=info msg="connecting to shim ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc" address="unix:///run/containerd/s/51ef7e71328705b53e218e9b8b35520d189742055afab3632bc817a8033fd943" protocol=ttrpc version=3 Nov 24 00:07:44.642944 containerd[1972]: time="2025-11-24T00:07:44.642885021Z" level=info msg="Container 5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:07:44.647143 containerd[1972]: time="2025-11-24T00:07:44.647102652Z" level=info msg="Container 2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:07:44.660795 containerd[1972]: time="2025-11-24T00:07:44.660742426Z" level=info msg="CreateContainer within sandbox \"244d918cab43ba513a4baeaef2303151ad6461f597ff692477dfb97fd50de166\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2\"" Nov 24 00:07:44.663942 containerd[1972]: time="2025-11-24T00:07:44.663843594Z" level=info msg="StartContainer for \"5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2\"" Nov 24 00:07:44.666290 containerd[1972]: time="2025-11-24T00:07:44.666249394Z" level=info msg="connecting to shim 5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2" address="unix:///run/containerd/s/c4f8ff64aa35e052980cb120e87ceacb5f34d70ae9d59b47929857dd03698eba" protocol=ttrpc version=3 Nov 24 00:07:44.668490 containerd[1972]: time="2025-11-24T00:07:44.668446933Z" level=info msg="CreateContainer within sandbox \"178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2\"" Nov 24 00:07:44.668827 systemd[1]: Started cri-containerd-ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc.scope - libcontainer container ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc. 
Nov 24 00:07:44.669690 containerd[1972]: time="2025-11-24T00:07:44.669659044Z" level=info msg="StartContainer for \"2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2\"" Nov 24 00:07:44.673608 containerd[1972]: time="2025-11-24T00:07:44.673538620Z" level=info msg="connecting to shim 2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2" address="unix:///run/containerd/s/588e2715898a6ef941babed710e3ac511110f270ded9ddf3dc2ddef1fea7123b" protocol=ttrpc version=3 Nov 24 00:07:44.701600 kubelet[2938]: E1124 00:07:44.701434 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:07:44.720164 systemd[1]: Started cri-containerd-2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2.scope - libcontainer container 2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2. Nov 24 00:07:44.723849 systemd[1]: Started cri-containerd-5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2.scope - libcontainer container 5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2. Nov 24 00:07:44.731921 kubelet[2938]: E1124 00:07:44.731860 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:07:44.852322 containerd[1972]: time="2025-11-24T00:07:44.852277076Z" level=info msg="StartContainer for \"ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc\" returns successfully" Nov 24 00:07:44.884794 containerd[1972]: time="2025-11-24T00:07:44.883777839Z" level=info msg="StartContainer for \"2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2\" returns successfully" Nov 24 00:07:44.885185 containerd[1972]: time="2025-11-24T00:07:44.885114881Z" level=info msg="StartContainer for \"5a8b20d1156a03fde8094a39901147c2e887ae8c582178adda5cca871ef7a1e2\" returns successfully" Nov 24 00:07:44.961622 kubelet[2938]: E1124 00:07:44.961461 2938 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-87&limit=500&resourceVersion=0\": dial tcp 172.31.16.87:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:07:45.051971 kubelet[2938]: E1124 00:07:45.051916 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-87?timeout=10s\": dial tcp 172.31.16.87:6443: connect: connection refused" interval="1.6s" Nov 24 00:07:45.274958 kubelet[2938]: I1124 00:07:45.274923 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-87" Nov 24 00:07:45.275453 kubelet[2938]: E1124 00:07:45.275348 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.87:6443/api/v1/nodes\": dial tcp 172.31.16.87:6443: connect: connection refused" node="ip-172-31-16-87" Nov 24 00:07:45.749030 kubelet[2938]: E1124 00:07:45.748896 
2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:45.756681 kubelet[2938]: E1124 00:07:45.756647 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:45.758905 kubelet[2938]: E1124 00:07:45.758870 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:46.762269 kubelet[2938]: E1124 00:07:46.762234 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:46.762802 kubelet[2938]: E1124 00:07:46.762773 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:46.765390 kubelet[2938]: E1124 00:07:46.765360 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:46.878295 kubelet[2938]: I1124 00:07:46.878261 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-87" Nov 24 00:07:47.763788 kubelet[2938]: E1124 00:07:47.763748 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:47.765186 kubelet[2938]: E1124 00:07:47.765156 2938 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:48.805531 kubelet[2938]: E1124 00:07:48.805481 2938 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-87\" not found" node="ip-172-31-16-87" Nov 24 00:07:48.847446 kubelet[2938]: I1124 00:07:48.847404 2938 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-87" Nov 24 00:07:48.943394 kubelet[2938]: I1124 00:07:48.943339 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:48.969918 kubelet[2938]: E1124 00:07:48.969858 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:48.969918 kubelet[2938]: I1124 00:07:48.969895 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:48.973183 kubelet[2938]: E1124 00:07:48.973116 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-87\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:48.973183 kubelet[2938]: I1124 00:07:48.973193 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:48.975764 kubelet[2938]: E1124 00:07:48.975546 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-87\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:49.602276 kubelet[2938]: I1124 00:07:49.602160 2938 apiserver.go:52] "Watching apiserver" Nov 24 00:07:49.644340 kubelet[2938]: I1124 00:07:49.644192 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:07:49.788221 kubelet[2938]: I1124 00:07:49.787030 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:50.203450 kubelet[2938]: I1124 00:07:50.203412 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:50.843776 kubelet[2938]: I1124 00:07:50.843731 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:51.007313 systemd[1]: Reload requested from client PID 3212 ('systemctl') (unit session-9.scope)... Nov 24 00:07:51.007337 systemd[1]: Reloading... Nov 24 00:07:51.163646 zram_generator::config[3256]: No configuration found. Nov 24 00:07:51.516414 systemd[1]: Reloading finished in 508 ms. Nov 24 00:07:51.565963 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:07:51.581868 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:07:51.582129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:51.582193 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 128.3M memory peak. Nov 24 00:07:51.587232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:07:51.975832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:07:51.998362 (kubelet)[3316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:07:52.086142 kubelet[3316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:07:52.086142 kubelet[3316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:07:52.086142 kubelet[3316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
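When cross-referencing the long sandbox and container ids that containerd logs (the "RunPodSandbox ... returns sandbox id ..." and "StartContainer for ..." entries earlier in this journal), a small filter over journalctl output can map each id back to its static pod. The regex below is keyed to the exact message format shown above; treat it as a log-reading aid under that assumption, not a supported containerd interface.

    import re
    import sys

    # Matches containerd's "RunPodSandbox ... returns sandbox id" messages as
    # they appear in this journal: the pod name comes from
    # PodSandboxMetadata{Name:...} and the 64-hex-char id from the quoted
    # value at the end of the message.
    SANDBOX_RE = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),'
        r'.*?returns sandbox id \\?"([0-9a-f]{64})'
    )

    def sandbox_ids(lines):
        """Yield (pod_name, sandbox_id) pairs found in journal lines."""
        for line in lines:
            m = SANDBOX_RE.search(line)
            if m:
                yield m.group(1), m.group(2)

    if __name__ == "__main__":
        # Example: journalctl -u containerd --no-pager | python3 sandbox_ids.py
        for name, sid in sandbox_ids(sys.stdin):
            print(f"{sid[:12]}  {name}")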
Nov 24 00:07:52.086930 kubelet[3316]: I1124 00:07:52.086208 3316 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:07:52.096535 kubelet[3316]: I1124 00:07:52.096493 3316 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:07:52.097782 kubelet[3316]: I1124 00:07:52.096756 3316 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:07:52.097782 kubelet[3316]: I1124 00:07:52.097165 3316 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:07:52.102350 kubelet[3316]: I1124 00:07:52.102285 3316 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 00:07:52.109979 kubelet[3316]: I1124 00:07:52.109927 3316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:07:52.123722 kubelet[3316]: I1124 00:07:52.123691 3316 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:07:52.129962 kubelet[3316]: I1124 00:07:52.129923 3316 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 24 00:07:52.133115 kubelet[3316]: I1124 00:07:52.133067 3316 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:07:52.133333 kubelet[3316]: I1124 00:07:52.133113 3316 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:07:52.133484 kubelet[3316]: I1124 00:07:52.133345 3316 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:07:52.133484 kubelet[3316]: I1124 00:07:52.133361 3316 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:07:52.133484 kubelet[3316]: I1124 00:07:52.133448 3316 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:07:52.134903 kubelet[3316]: I1124 
00:07:52.133685 3316 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:07:52.134903 kubelet[3316]: I1124 00:07:52.133707 3316 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:07:52.134903 kubelet[3316]: I1124 00:07:52.133736 3316 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:07:52.136233 kubelet[3316]: I1124 00:07:52.136017 3316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:07:52.143794 kubelet[3316]: I1124 00:07:52.143764 3316 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:07:52.147481 kubelet[3316]: I1124 00:07:52.147445 3316 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:07:52.155065 kubelet[3316]: I1124 00:07:52.153988 3316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:07:52.155065 kubelet[3316]: I1124 00:07:52.154048 3316 server.go:1289] "Started kubelet" Nov 24 00:07:52.161388 kubelet[3316]: I1124 00:07:52.160277 3316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:07:52.168654 kubelet[3316]: I1124 00:07:52.167114 3316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:07:52.178325 kubelet[3316]: I1124 00:07:52.178235 3316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:07:52.183127 kubelet[3316]: I1124 00:07:52.181702 3316 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:07:52.191161 kubelet[3316]: I1124 00:07:52.191002 3316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:07:52.200435 kubelet[3316]: I1124 00:07:52.198932 3316 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:07:52.206585 kubelet[3316]: I1124 00:07:52.206529 3316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:07:52.207880 kubelet[3316]: I1124 00:07:52.206955 3316 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:07:52.209352 kubelet[3316]: I1124 00:07:52.208793 3316 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:07:52.218586 kubelet[3316]: I1124 00:07:52.217726 3316 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:07:52.218586 kubelet[3316]: I1124 00:07:52.217852 3316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:07:52.224424 kubelet[3316]: E1124 00:07:52.224384 3316 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:07:52.228629 kubelet[3316]: I1124 00:07:52.227469 3316 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:07:52.233296 kubelet[3316]: I1124 00:07:52.233162 3316 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:07:52.236501 kubelet[3316]: I1124 00:07:52.235801 3316 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 24 00:07:52.236501 kubelet[3316]: I1124 00:07:52.235832 3316 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:07:52.236501 kubelet[3316]: I1124 00:07:52.235862 3316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:07:52.236501 kubelet[3316]: I1124 00:07:52.235870 3316 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:07:52.236501 kubelet[3316]: E1124 00:07:52.235924 3316 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324582 3316 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324651 3316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324682 3316 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324920 3316 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324933 3316 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324974 3316 policy_none.go:49] "None policy: Start" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.324990 3316 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.325002 3316 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:07:52.324611 kubelet[3316]: I1124 00:07:52.325164 3316 state_mem.go:75] "Updated machine memory state" Nov 24 00:07:52.336474 kubelet[3316]: E1124 00:07:52.336436 3316 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 24 00:07:52.337186 kubelet[3316]: E1124 00:07:52.337160 3316 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:07:52.337682 kubelet[3316]: I1124 00:07:52.337573 3316 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:07:52.337682 kubelet[3316]: I1124 00:07:52.337603 3316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:07:52.341385 kubelet[3316]: I1124 00:07:52.340172 3316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:07:52.346806 kubelet[3316]: E1124 00:07:52.346776 3316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:07:52.459175 kubelet[3316]: I1124 00:07:52.459129 3316 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-87" Nov 24 00:07:52.475589 kubelet[3316]: I1124 00:07:52.475493 3316 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-87" Nov 24 00:07:52.475749 kubelet[3316]: I1124 00:07:52.475622 3316 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-87" Nov 24 00:07:52.544595 kubelet[3316]: I1124 00:07:52.542803 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:52.545087 kubelet[3316]: I1124 00:07:52.545064 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:52.545773 kubelet[3316]: I1124 00:07:52.545359 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.563034 kubelet[3316]: E1124 00:07:52.562432 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-87\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:52.563494 kubelet[3316]: E1124 00:07:52.563005 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-87\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.564602 kubelet[3316]: E1124 00:07:52.564496 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-87\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:52.614639 kubelet[3316]: I1124 00:07:52.613610 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e63252b0ab9750cc79304959fc044f32-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-87\" (UID: \"e63252b0ab9750cc79304959fc044f32\") " pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:52.614842 kubelet[3316]: I1124 00:07:52.614661 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/924af24797d9d85eee00432aa4a8f9ab-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-87\" (UID: \"924af24797d9d85eee00432aa4a8f9ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:52.614842 kubelet[3316]: I1124 00:07:52.614700 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/924af24797d9d85eee00432aa4a8f9ab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-87\" (UID: \"924af24797d9d85eee00432aa4a8f9ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:52.614842 kubelet[3316]: I1124 00:07:52.614727 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.614842 kubelet[3316]: I1124 00:07:52.614749 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.614842 kubelet[3316]: I1124 00:07:52.614775 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.615030 kubelet[3316]: I1124 00:07:52.614798 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/924af24797d9d85eee00432aa4a8f9ab-ca-certs\") pod \"kube-apiserver-ip-172-31-16-87\" (UID: \"924af24797d9d85eee00432aa4a8f9ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:52.615030 kubelet[3316]: I1124 00:07:52.614822 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.615030 kubelet[3316]: I1124 00:07:52.614872 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1368c83d8d7188e4557cd2f81016c96-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-87\" (UID: \"d1368c83d8d7188e4557cd2f81016c96\") " pod="kube-system/kube-controller-manager-ip-172-31-16-87" Nov 24 00:07:52.617895 update_engine[1961]: I20251124 00:07:52.617799 1961 update_attempter.cc:509] Updating boot flags... 
Nov 24 00:07:53.147984 kubelet[3316]: I1124 00:07:53.147929 3316 apiserver.go:52] "Watching apiserver" Nov 24 00:07:53.206964 kubelet[3316]: I1124 00:07:53.206904 3316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:07:53.303290 kubelet[3316]: I1124 00:07:53.302657 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:53.303977 kubelet[3316]: I1124 00:07:53.303947 3316 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:53.330749 kubelet[3316]: E1124 00:07:53.330702 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-87\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-87" Nov 24 00:07:53.331475 kubelet[3316]: E1124 00:07:53.331187 3316 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-87\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-87" Nov 24 00:07:53.745394 kubelet[3316]: I1124 00:07:53.744988 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-87" podStartSLOduration=4.74496482 podStartE2EDuration="4.74496482s" podCreationTimestamp="2025-11-24 00:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:07:53.660782211 +0000 UTC m=+1.650804846" watchObservedRunningTime="2025-11-24 00:07:53.74496482 +0000 UTC m=+1.734987460" Nov 24 00:07:53.841450 kubelet[3316]: I1124 00:07:53.840413 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-87" podStartSLOduration=3.840393427 podStartE2EDuration="3.840393427s" podCreationTimestamp="2025-11-24 00:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:07:53.75624523 +0000 UTC m=+1.746267863" watchObservedRunningTime="2025-11-24 00:07:53.840393427 +0000 UTC m=+1.830416064" Nov 24 00:07:53.841450 kubelet[3316]: I1124 00:07:53.840521 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-87" podStartSLOduration=3.84051331 podStartE2EDuration="3.84051331s" podCreationTimestamp="2025-11-24 00:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:07:53.84016187 +0000 UTC m=+1.830184506" watchObservedRunningTime="2025-11-24 00:07:53.84051331 +0000 UTC m=+1.830535945" Nov 24 00:07:56.246170 kubelet[3316]: I1124 00:07:56.246135 3316 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:07:56.246908 containerd[1972]: time="2025-11-24T00:07:56.246469240Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:07:56.247579 kubelet[3316]: I1124 00:07:56.247303 3316 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:07:57.322001 systemd[1]: Created slice kubepods-besteffort-pod0548154c_cd2c_4b9d_83cf_4c28f2895072.slice - libcontainer container kubepods-besteffort-pod0548154c_cd2c_4b9d_83cf_4c28f2895072.slice. 
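[annotation] The entries at 00:07:56 above show the kubelet pushing the node's pod CIDR (192.168.0.0/24) to the CRI runtime and updating its own network config. A small sketch, assuming the Python kubernetes client and a working kubeconfig, to confirm the same CIDR from the Node object:

    # Sketch: confirm the podCIDR that the kubelet passed down to containerd.
    from kubernetes import client, config

    config.load_kube_config()
    node = client.CoreV1Api().read_node("ip-172-31-16-87")
    print(node.spec.pod_cidr)   # expected "192.168.0.0/24" per the log
    print(node.spec.pod_cidrs)  # dual-stack form, if populated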
Nov 24 00:07:57.353076 kubelet[3316]: I1124 00:07:57.353035 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0548154c-cd2c-4b9d-83cf-4c28f2895072-kube-proxy\") pod \"kube-proxy-kkmjr\" (UID: \"0548154c-cd2c-4b9d-83cf-4c28f2895072\") " pod="kube-system/kube-proxy-kkmjr" Nov 24 00:07:57.353558 kubelet[3316]: I1124 00:07:57.353084 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0548154c-cd2c-4b9d-83cf-4c28f2895072-lib-modules\") pod \"kube-proxy-kkmjr\" (UID: \"0548154c-cd2c-4b9d-83cf-4c28f2895072\") " pod="kube-system/kube-proxy-kkmjr" Nov 24 00:07:57.353558 kubelet[3316]: I1124 00:07:57.353128 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0548154c-cd2c-4b9d-83cf-4c28f2895072-xtables-lock\") pod \"kube-proxy-kkmjr\" (UID: \"0548154c-cd2c-4b9d-83cf-4c28f2895072\") " pod="kube-system/kube-proxy-kkmjr" Nov 24 00:07:57.353558 kubelet[3316]: I1124 00:07:57.353153 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtfkg\" (UniqueName: \"kubernetes.io/projected/0548154c-cd2c-4b9d-83cf-4c28f2895072-kube-api-access-qtfkg\") pod \"kube-proxy-kkmjr\" (UID: \"0548154c-cd2c-4b9d-83cf-4c28f2895072\") " pod="kube-system/kube-proxy-kkmjr" Nov 24 00:07:57.449845 systemd[1]: Created slice kubepods-besteffort-podeb9e51cc_0859_4462_8e20_303778b4efc4.slice - libcontainer container kubepods-besteffort-podeb9e51cc_0859_4462_8e20_303778b4efc4.slice. Nov 24 00:07:57.460217 kubelet[3316]: I1124 00:07:57.460157 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb9e51cc-0859-4462-8e20-303778b4efc4-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pjjcb\" (UID: \"eb9e51cc-0859-4462-8e20-303778b4efc4\") " pod="tigera-operator/tigera-operator-7dcd859c48-pjjcb" Nov 24 00:07:57.463441 kubelet[3316]: I1124 00:07:57.463295 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8msz\" (UniqueName: \"kubernetes.io/projected/eb9e51cc-0859-4462-8e20-303778b4efc4-kube-api-access-w8msz\") pod \"tigera-operator-7dcd859c48-pjjcb\" (UID: \"eb9e51cc-0859-4462-8e20-303778b4efc4\") " pod="tigera-operator/tigera-operator-7dcd859c48-pjjcb" Nov 24 00:07:57.633528 containerd[1972]: time="2025-11-24T00:07:57.633391045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkmjr,Uid:0548154c-cd2c-4b9d-83cf-4c28f2895072,Namespace:kube-system,Attempt:0,}" Nov 24 00:07:57.667255 containerd[1972]: time="2025-11-24T00:07:57.667130163Z" level=info msg="connecting to shim 71e47ae6f0cbe688482c6a52e17eab35d032842f5b98c939e524a5a0cfde8ff0" address="unix:///run/containerd/s/abf32a7b4e957fe3280b96e4612b44d3d5db9529d2e3caae8930a4ff3b1c2a11" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:07:57.705880 systemd[1]: Started cri-containerd-71e47ae6f0cbe688482c6a52e17eab35d032842f5b98c939e524a5a0cfde8ff0.scope - libcontainer container 71e47ae6f0cbe688482c6a52e17eab35d032842f5b98c939e524a5a0cfde8ff0. 
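[annotation] The kube-proxy-kkmjr pod above mounts its kube-proxy ConfigMap plus the host's lib-modules and xtables-lock paths. A sketch for pulling the ConfigMap that backs that volume; the key names in the comments are an assumption based on kubeadm defaults, not read from this log:

    # Sketch: fetch the kube-proxy ConfigMap mounted into kube-proxy-kkmjr.
    from kubernetes import client, config

    config.load_kube_config()
    cm = client.CoreV1Api().read_namespaced_config_map("kube-proxy", "kube-system")
    print(sorted(cm.data))                    # typically config.conf and kubeconfig.conf
    print(cm.data.get("config.conf", "")[:200])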
Nov 24 00:07:57.743377 containerd[1972]: time="2025-11-24T00:07:57.743326293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkmjr,Uid:0548154c-cd2c-4b9d-83cf-4c28f2895072,Namespace:kube-system,Attempt:0,} returns sandbox id \"71e47ae6f0cbe688482c6a52e17eab35d032842f5b98c939e524a5a0cfde8ff0\"" Nov 24 00:07:57.753512 containerd[1972]: time="2025-11-24T00:07:57.753453309Z" level=info msg="CreateContainer within sandbox \"71e47ae6f0cbe688482c6a52e17eab35d032842f5b98c939e524a5a0cfde8ff0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:07:57.757588 containerd[1972]: time="2025-11-24T00:07:57.757521018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pjjcb,Uid:eb9e51cc-0859-4462-8e20-303778b4efc4,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:07:57.782625 containerd[1972]: time="2025-11-24T00:07:57.781231878Z" level=info msg="Container a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:07:57.805633 containerd[1972]: time="2025-11-24T00:07:57.805518417Z" level=info msg="CreateContainer within sandbox \"71e47ae6f0cbe688482c6a52e17eab35d032842f5b98c939e524a5a0cfde8ff0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0\"" Nov 24 00:07:57.807125 containerd[1972]: time="2025-11-24T00:07:57.806815256Z" level=info msg="connecting to shim 49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142" address="unix:///run/containerd/s/c7e793adc2a60379c6220bfdea6c12b7244d9f630a6d921ac1c8d3593d2d2f71" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:07:57.807462 containerd[1972]: time="2025-11-24T00:07:57.806938227Z" level=info msg="StartContainer for \"a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0\"" Nov 24 00:07:57.810966 containerd[1972]: time="2025-11-24T00:07:57.810916253Z" level=info msg="connecting to shim a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0" address="unix:///run/containerd/s/abf32a7b4e957fe3280b96e4612b44d3d5db9529d2e3caae8930a4ff3b1c2a11" protocol=ttrpc version=3 Nov 24 00:07:57.861067 systemd[1]: Started cri-containerd-49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142.scope - libcontainer container 49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142. Nov 24 00:07:57.864290 systemd[1]: Started cri-containerd-a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0.scope - libcontainer container a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0. 
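[annotation] The containerd entries above walk the usual CRI sequence for both pods: RunPodSandbox returns a sandbox id, CreateContainer returns a container id inside that sandbox, then StartContainer runs it. A hedged sketch (Python kubernetes client, kubeconfig assumed) to check that the two pods end up with Running, ready containers whose containerIDs match the ids in the log:

    # Sketch: inspect container statuses for the two pods started above.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    for ns, name in (("kube-system", "kube-proxy-kkmjr"),
                     ("tigera-operator", "tigera-operator-7dcd859c48-pjjcb")):
        pod = v1.read_namespaced_pod(name, ns)
        for cs in pod.status.container_statuses or []:
            print(ns, cs.name, cs.ready, cs.container_id)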
Nov 24 00:07:57.962521 containerd[1972]: time="2025-11-24T00:07:57.962000801Z" level=info msg="StartContainer for \"a6e2bc1ac9f96731cc4ff62277b6984e629129de282cb077e7fa6a8cf83c1dc0\" returns successfully" Nov 24 00:07:57.964407 containerd[1972]: time="2025-11-24T00:07:57.964357054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pjjcb,Uid:eb9e51cc-0859-4462-8e20-303778b4efc4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142\"" Nov 24 00:07:57.966950 containerd[1972]: time="2025-11-24T00:07:57.966909506Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:07:58.334579 kubelet[3316]: I1124 00:07:58.334508 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkmjr" podStartSLOduration=1.334491463 podStartE2EDuration="1.334491463s" podCreationTimestamp="2025-11-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:07:58.334320194 +0000 UTC m=+6.324342829" watchObservedRunningTime="2025-11-24 00:07:58.334491463 +0000 UTC m=+6.324514097" Nov 24 00:07:59.077923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount770191930.mount: Deactivated successfully. Nov 24 00:08:00.248465 containerd[1972]: time="2025-11-24T00:08:00.248236397Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:00.250418 containerd[1972]: time="2025-11-24T00:08:00.250166944Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:08:00.252783 containerd[1972]: time="2025-11-24T00:08:00.252733923Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:00.276410 containerd[1972]: time="2025-11-24T00:08:00.276339910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:00.277507 containerd[1972]: time="2025-11-24T00:08:00.277457517Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.310506988s" Nov 24 00:08:00.277507 containerd[1972]: time="2025-11-24T00:08:00.277505056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:08:00.287495 containerd[1972]: time="2025-11-24T00:08:00.287295600Z" level=info msg="CreateContainer within sandbox \"49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:08:00.313588 containerd[1972]: time="2025-11-24T00:08:00.311273132Z" level=info msg="Container 7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:00.319483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239049567.mount: 
Deactivated successfully. Nov 24 00:08:00.331326 containerd[1972]: time="2025-11-24T00:08:00.331254281Z" level=info msg="CreateContainer within sandbox \"49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\"" Nov 24 00:08:00.342296 containerd[1972]: time="2025-11-24T00:08:00.341807323Z" level=info msg="StartContainer for \"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\"" Nov 24 00:08:00.343929 containerd[1972]: time="2025-11-24T00:08:00.343860633Z" level=info msg="connecting to shim 7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237" address="unix:///run/containerd/s/c7e793adc2a60379c6220bfdea6c12b7244d9f630a6d921ac1c8d3593d2d2f71" protocol=ttrpc version=3 Nov 24 00:08:00.396748 systemd[1]: Started cri-containerd-7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237.scope - libcontainer container 7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237. Nov 24 00:08:00.442089 containerd[1972]: time="2025-11-24T00:08:00.442048311Z" level=info msg="StartContainer for \"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\" returns successfully" Nov 24 00:08:02.171710 kubelet[3316]: I1124 00:08:02.171606 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pjjcb" podStartSLOduration=2.854791576 podStartE2EDuration="5.167281925s" podCreationTimestamp="2025-11-24 00:07:57 +0000 UTC" firstStartedPulling="2025-11-24 00:07:57.966176832 +0000 UTC m=+5.956199464" lastFinishedPulling="2025-11-24 00:08:00.2786672 +0000 UTC m=+8.268689813" observedRunningTime="2025-11-24 00:08:01.784196621 +0000 UTC m=+9.774219258" watchObservedRunningTime="2025-11-24 00:08:02.167281925 +0000 UTC m=+10.157304561" Nov 24 00:08:09.458452 sudo[2371]: pam_unix(sudo:session): session closed for user root Nov 24 00:08:09.482024 sshd[2370]: Connection closed by 139.178.68.195 port 48882 Nov 24 00:08:09.483194 sshd-session[2367]: pam_unix(sshd:session): session closed for user core Nov 24 00:08:09.491438 systemd[1]: sshd@8-172.31.16.87:22-139.178.68.195:48882.service: Deactivated successfully. Nov 24 00:08:09.494543 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:08:09.494909 systemd[1]: session-9.scope: Consumed 6.067s CPU time, 151.9M memory peak. Nov 24 00:08:09.497777 systemd-logind[1954]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:08:09.503626 systemd-logind[1954]: Removed session 9. Nov 24 00:08:17.095578 systemd[1]: Created slice kubepods-besteffort-pod888a7466_9a34_485b_9c69_ab65509ab1cc.slice - libcontainer container kubepods-besteffort-pod888a7466_9a34_485b_9c69_ab65509ab1cc.slice. 
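[annotation] The pod_startup_latency_tracker entry just above reports podStartE2EDuration of about 5.17s for the tigera-operator pod but podStartSLOduration of only about 2.85s; the SLO figure excludes the image pull window (firstStartedPulling to lastFinishedPulling). A quick check of that arithmetic from the logged timestamps (truncated to microseconds, so the result only matches to within rounding):

    # Sketch: podStartSLOduration ~= E2E duration minus the image pull window.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-11-24 00:07:57.966176", fmt)
    last_pull = datetime.strptime("2025-11-24 00:08:00.278667", fmt)
    e2e = 5.167281925  # podStartE2EDuration from the log

    pull = (last_pull - first_pull).total_seconds()
    print(round(e2e - pull, 6))  # ~2.854791, close to podStartSLOduration=2.854791576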
Nov 24 00:08:17.197057 kubelet[3316]: I1124 00:08:17.196900 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/888a7466-9a34-485b-9c69-ab65509ab1cc-tigera-ca-bundle\") pod \"calico-typha-659f7dd8c5-6tcn4\" (UID: \"888a7466-9a34-485b-9c69-ab65509ab1cc\") " pod="calico-system/calico-typha-659f7dd8c5-6tcn4" Nov 24 00:08:17.197057 kubelet[3316]: I1124 00:08:17.196960 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8hj\" (UniqueName: \"kubernetes.io/projected/888a7466-9a34-485b-9c69-ab65509ab1cc-kube-api-access-vq8hj\") pod \"calico-typha-659f7dd8c5-6tcn4\" (UID: \"888a7466-9a34-485b-9c69-ab65509ab1cc\") " pod="calico-system/calico-typha-659f7dd8c5-6tcn4" Nov 24 00:08:17.197057 kubelet[3316]: I1124 00:08:17.196984 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/888a7466-9a34-485b-9c69-ab65509ab1cc-typha-certs\") pod \"calico-typha-659f7dd8c5-6tcn4\" (UID: \"888a7466-9a34-485b-9c69-ab65509ab1cc\") " pod="calico-system/calico-typha-659f7dd8c5-6tcn4" Nov 24 00:08:17.343212 systemd[1]: Created slice kubepods-besteffort-podd28df994_100d_4c16_a65b_d89fe73d9eed.slice - libcontainer container kubepods-besteffort-podd28df994_100d_4c16_a65b_d89fe73d9eed.slice. Nov 24 00:08:17.398116 kubelet[3316]: I1124 00:08:17.397965 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d28df994-100d-4c16-a65b-d89fe73d9eed-node-certs\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.398272 kubelet[3316]: I1124 00:08:17.398126 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28df994-100d-4c16-a65b-d89fe73d9eed-tigera-ca-bundle\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.398272 kubelet[3316]: I1124 00:08:17.398162 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-flexvol-driver-host\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.398845 kubelet[3316]: I1124 00:08:17.398806 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-policysync\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400581 kubelet[3316]: I1124 00:08:17.399622 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-xtables-lock\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400581 kubelet[3316]: I1124 00:08:17.399750 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-cni-bin-dir\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400581 kubelet[3316]: I1124 00:08:17.399775 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-lib-modules\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400581 kubelet[3316]: I1124 00:08:17.399848 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-var-lib-calico\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400581 kubelet[3316]: I1124 00:08:17.399872 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-var-run-calico\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400879 kubelet[3316]: I1124 00:08:17.399937 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-cni-net-dir\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400879 kubelet[3316]: I1124 00:08:17.399990 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzq9\" (UniqueName: \"kubernetes.io/projected/d28df994-100d-4c16-a65b-d89fe73d9eed-kube-api-access-4lzq9\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.400879 kubelet[3316]: I1124 00:08:17.400014 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d28df994-100d-4c16-a65b-d89fe73d9eed-cni-log-dir\") pod \"calico-node-hktjj\" (UID: \"d28df994-100d-4c16-a65b-d89fe73d9eed\") " pod="calico-system/calico-node-hktjj" Nov 24 00:08:17.412092 containerd[1972]: time="2025-11-24T00:08:17.412025690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-659f7dd8c5-6tcn4,Uid:888a7466-9a34-485b-9c69-ab65509ab1cc,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:17.462722 containerd[1972]: time="2025-11-24T00:08:17.462638252Z" level=info msg="connecting to shim 72bd85101202d85495d67c59e949cd5f633daee2c78d0a5e707ed800ac9d3d65" address="unix:///run/containerd/s/d5fedda7a6086b86f8a3bfccdf89ce1d808511816e7e75bb42d927e81f91e469" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:17.537594 kubelet[3316]: E1124 00:08:17.531022 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.537594 kubelet[3316]: W1124 00:08:17.531075 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 
00:08:17.537944 kubelet[3316]: E1124 00:08:17.537908 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.546160 systemd[1]: Started cri-containerd-72bd85101202d85495d67c59e949cd5f633daee2c78d0a5e707ed800ac9d3d65.scope - libcontainer container 72bd85101202d85495d67c59e949cd5f633daee2c78d0a5e707ed800ac9d3d65. Nov 24 00:08:17.552151 kubelet[3316]: E1124 00:08:17.551137 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.552151 kubelet[3316]: W1124 00:08:17.551173 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.552151 kubelet[3316]: E1124 00:08:17.551221 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.557073 kubelet[3316]: E1124 00:08:17.556700 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.557073 kubelet[3316]: W1124 00:08:17.556735 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.557073 kubelet[3316]: E1124 00:08:17.556790 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.614713 kubelet[3316]: E1124 00:08:17.614653 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:17.619999 kubelet[3316]: E1124 00:08:17.619872 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.619999 kubelet[3316]: W1124 00:08:17.619903 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.619999 kubelet[3316]: E1124 00:08:17.619931 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.621741 kubelet[3316]: E1124 00:08:17.621623 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.621741 kubelet[3316]: W1124 00:08:17.621647 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.621741 kubelet[3316]: E1124 00:08:17.621673 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.622190 kubelet[3316]: E1124 00:08:17.622173 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.622350 kubelet[3316]: W1124 00:08:17.622272 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.622350 kubelet[3316]: E1124 00:08:17.622294 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.622875 kubelet[3316]: E1124 00:08:17.622771 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.622875 kubelet[3316]: W1124 00:08:17.622785 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.622875 kubelet[3316]: E1124 00:08:17.622803 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.623487 kubelet[3316]: E1124 00:08:17.623471 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.624710 kubelet[3316]: W1124 00:08:17.624618 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.624710 kubelet[3316]: E1124 00:08:17.624643 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.627001 kubelet[3316]: E1124 00:08:17.626589 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.627001 kubelet[3316]: W1124 00:08:17.626614 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.627001 kubelet[3316]: E1124 00:08:17.626632 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.627762 kubelet[3316]: E1124 00:08:17.627742 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.628091 kubelet[3316]: W1124 00:08:17.627874 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.628091 kubelet[3316]: E1124 00:08:17.627895 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.629813 kubelet[3316]: E1124 00:08:17.629644 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.629813 kubelet[3316]: W1124 00:08:17.629663 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.629813 kubelet[3316]: E1124 00:08:17.629681 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.630034 kubelet[3316]: E1124 00:08:17.630023 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.630096 kubelet[3316]: W1124 00:08:17.630085 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.630272 kubelet[3316]: E1124 00:08:17.630159 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.630385 kubelet[3316]: E1124 00:08:17.630373 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.630447 kubelet[3316]: W1124 00:08:17.630437 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.630511 kubelet[3316]: E1124 00:08:17.630501 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.630793 kubelet[3316]: E1124 00:08:17.630770 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.630938 kubelet[3316]: W1124 00:08:17.630872 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.630938 kubelet[3316]: E1124 00:08:17.630890 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.631675 kubelet[3316]: E1124 00:08:17.631355 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.631675 kubelet[3316]: W1124 00:08:17.631591 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.631675 kubelet[3316]: E1124 00:08:17.631609 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.632972 kubelet[3316]: E1124 00:08:17.632003 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.633143 kubelet[3316]: W1124 00:08:17.633070 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.633143 kubelet[3316]: E1124 00:08:17.633092 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.633493 kubelet[3316]: E1124 00:08:17.633479 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.633702 kubelet[3316]: W1124 00:08:17.633631 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.633702 kubelet[3316]: E1124 00:08:17.633651 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.634097 kubelet[3316]: E1124 00:08:17.634021 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.634097 kubelet[3316]: W1124 00:08:17.634034 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.634097 kubelet[3316]: E1124 00:08:17.634047 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.634661 kubelet[3316]: E1124 00:08:17.634522 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.634661 kubelet[3316]: W1124 00:08:17.634545 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.635025 kubelet[3316]: E1124 00:08:17.634859 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.635636 kubelet[3316]: E1124 00:08:17.635621 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.635782 kubelet[3316]: W1124 00:08:17.635714 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.635782 kubelet[3316]: E1124 00:08:17.635733 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.637581 kubelet[3316]: E1124 00:08:17.636117 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.637581 kubelet[3316]: W1124 00:08:17.636131 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.637581 kubelet[3316]: E1124 00:08:17.636145 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.637999 kubelet[3316]: E1124 00:08:17.637982 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.638171 kubelet[3316]: W1124 00:08:17.638082 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.638171 kubelet[3316]: E1124 00:08:17.638103 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.638597 kubelet[3316]: E1124 00:08:17.638466 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.638597 kubelet[3316]: W1124 00:08:17.638478 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.638597 kubelet[3316]: E1124 00:08:17.638492 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.666767 containerd[1972]: time="2025-11-24T00:08:17.665955575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hktjj,Uid:d28df994-100d-4c16-a65b-d89fe73d9eed,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:17.704089 kubelet[3316]: E1124 00:08:17.704019 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.704682 kubelet[3316]: W1124 00:08:17.704174 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.704682 kubelet[3316]: E1124 00:08:17.704211 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.704682 kubelet[3316]: I1124 00:08:17.704481 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7f2741e-c2a8-4e97-9679-431279b978f1-registration-dir\") pod \"csi-node-driver-44qlh\" (UID: \"a7f2741e-c2a8-4e97-9679-431279b978f1\") " pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:17.708753 kubelet[3316]: E1124 00:08:17.706861 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.708753 kubelet[3316]: W1124 00:08:17.708628 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.708753 kubelet[3316]: E1124 00:08:17.708666 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.708753 kubelet[3316]: I1124 00:08:17.708712 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7f2741e-c2a8-4e97-9679-431279b978f1-socket-dir\") pod \"csi-node-driver-44qlh\" (UID: \"a7f2741e-c2a8-4e97-9679-431279b978f1\") " pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:17.709738 kubelet[3316]: E1124 00:08:17.709349 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.709738 kubelet[3316]: W1124 00:08:17.709440 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.709738 kubelet[3316]: E1124 00:08:17.709465 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.710956 kubelet[3316]: E1124 00:08:17.710083 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.710956 kubelet[3316]: W1124 00:08:17.710119 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.710956 kubelet[3316]: E1124 00:08:17.710138 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.710956 kubelet[3316]: E1124 00:08:17.710447 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.710956 kubelet[3316]: W1124 00:08:17.710457 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.710956 kubelet[3316]: E1124 00:08:17.710512 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.710956 kubelet[3316]: I1124 00:08:17.710551 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7f2741e-c2a8-4e97-9679-431279b978f1-varrun\") pod \"csi-node-driver-44qlh\" (UID: \"a7f2741e-c2a8-4e97-9679-431279b978f1\") " pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:17.713378 kubelet[3316]: E1124 00:08:17.712273 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.713378 kubelet[3316]: W1124 00:08:17.712295 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.713378 kubelet[3316]: E1124 00:08:17.712314 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.715705 kubelet[3316]: E1124 00:08:17.713853 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.715705 kubelet[3316]: W1124 00:08:17.713869 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.715705 kubelet[3316]: E1124 00:08:17.713886 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.716748 kubelet[3316]: E1124 00:08:17.716590 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.716748 kubelet[3316]: W1124 00:08:17.716613 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.716748 kubelet[3316]: E1124 00:08:17.716641 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.717678 kubelet[3316]: I1124 00:08:17.717280 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7f2741e-c2a8-4e97-9679-431279b978f1-kubelet-dir\") pod \"csi-node-driver-44qlh\" (UID: \"a7f2741e-c2a8-4e97-9679-431279b978f1\") " pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:17.718657 kubelet[3316]: E1124 00:08:17.718530 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.720728 kubelet[3316]: W1124 00:08:17.718726 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.720728 kubelet[3316]: E1124 00:08:17.718753 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.720728 kubelet[3316]: E1124 00:08:17.719693 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.720728 kubelet[3316]: W1124 00:08:17.719707 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.720728 kubelet[3316]: E1124 00:08:17.719726 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.720728 kubelet[3316]: I1124 00:08:17.719886 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvb8n\" (UniqueName: \"kubernetes.io/projected/a7f2741e-c2a8-4e97-9679-431279b978f1-kube-api-access-bvb8n\") pod \"csi-node-driver-44qlh\" (UID: \"a7f2741e-c2a8-4e97-9679-431279b978f1\") " pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:17.722658 kubelet[3316]: E1124 00:08:17.720909 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.722658 kubelet[3316]: W1124 00:08:17.720921 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.722658 kubelet[3316]: E1124 00:08:17.720936 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.722658 kubelet[3316]: E1124 00:08:17.722109 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.722658 kubelet[3316]: W1124 00:08:17.722125 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.722658 kubelet[3316]: E1124 00:08:17.722141 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.723556 kubelet[3316]: E1124 00:08:17.723338 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.723556 kubelet[3316]: W1124 00:08:17.723353 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.723556 kubelet[3316]: E1124 00:08:17.723368 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.724475 kubelet[3316]: E1124 00:08:17.724418 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.725821 kubelet[3316]: W1124 00:08:17.724555 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.725821 kubelet[3316]: E1124 00:08:17.725797 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.726555 kubelet[3316]: E1124 00:08:17.726461 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.726797 kubelet[3316]: W1124 00:08:17.726674 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.726797 kubelet[3316]: E1124 00:08:17.726697 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.732586 containerd[1972]: time="2025-11-24T00:08:17.732480251Z" level=info msg="connecting to shim c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43" address="unix:///run/containerd/s/6ddd46d3aa177c3c923267e1b077989fa232368568831d08f2f37dcb4a5ac9e3" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:17.784965 systemd[1]: Started cri-containerd-c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43.scope - libcontainer container c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43. Nov 24 00:08:17.829081 kubelet[3316]: E1124 00:08:17.829045 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.829081 kubelet[3316]: W1124 00:08:17.829086 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.829346 kubelet[3316]: E1124 00:08:17.829113 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.829900 kubelet[3316]: E1124 00:08:17.829859 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.830019 kubelet[3316]: W1124 00:08:17.829907 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.830019 kubelet[3316]: E1124 00:08:17.829929 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.830935 kubelet[3316]: E1124 00:08:17.830596 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.830935 kubelet[3316]: W1124 00:08:17.830614 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.830935 kubelet[3316]: E1124 00:08:17.830631 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.831165 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.833667 kubelet[3316]: W1124 00:08:17.831179 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.831218 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.831529 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.833667 kubelet[3316]: W1124 00:08:17.831540 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.831552 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.831946 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.833667 kubelet[3316]: W1124 00:08:17.831960 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.831995 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.833667 kubelet[3316]: E1124 00:08:17.832357 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.834210 kubelet[3316]: W1124 00:08:17.832387 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.834210 kubelet[3316]: E1124 00:08:17.832401 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.834210 kubelet[3316]: E1124 00:08:17.832763 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.834210 kubelet[3316]: W1124 00:08:17.832774 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.834210 kubelet[3316]: E1124 00:08:17.832786 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.834210 kubelet[3316]: E1124 00:08:17.833099 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.834210 kubelet[3316]: W1124 00:08:17.833109 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.834210 kubelet[3316]: E1124 00:08:17.833122 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.834210 kubelet[3316]: E1124 00:08:17.833448 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.834210 kubelet[3316]: W1124 00:08:17.833479 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.833491 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.833921 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.837471 kubelet[3316]: W1124 00:08:17.833935 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.833951 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.834225 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.837471 kubelet[3316]: W1124 00:08:17.834244 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.834257 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.834521 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.837471 kubelet[3316]: W1124 00:08:17.834532 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837471 kubelet[3316]: E1124 00:08:17.834551 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.834881 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.837976 kubelet[3316]: W1124 00:08:17.834892 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.834933 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.835205 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.837976 kubelet[3316]: W1124 00:08:17.835215 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.835236 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.835508 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.837976 kubelet[3316]: W1124 00:08:17.835519 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.835550 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.837976 kubelet[3316]: E1124 00:08:17.835852 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.838331 kubelet[3316]: W1124 00:08:17.835871 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.838331 kubelet[3316]: E1124 00:08:17.835883 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.838331 kubelet[3316]: E1124 00:08:17.836144 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.838331 kubelet[3316]: W1124 00:08:17.836153 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.838331 kubelet[3316]: E1124 00:08:17.836164 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.838331 kubelet[3316]: E1124 00:08:17.836354 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.838331 kubelet[3316]: W1124 00:08:17.836369 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.838331 kubelet[3316]: E1124 00:08:17.836379 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.838331 kubelet[3316]: E1124 00:08:17.837009 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.838331 kubelet[3316]: W1124 00:08:17.837022 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.839756 kubelet[3316]: E1124 00:08:17.837036 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.839756 kubelet[3316]: E1124 00:08:17.839001 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.839756 kubelet[3316]: W1124 00:08:17.839015 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.839756 kubelet[3316]: E1124 00:08:17.839033 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.839756 kubelet[3316]: E1124 00:08:17.839374 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.839756 kubelet[3316]: W1124 00:08:17.839386 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.839756 kubelet[3316]: E1124 00:08:17.839400 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.841308 kubelet[3316]: E1124 00:08:17.841280 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.841308 kubelet[3316]: W1124 00:08:17.841303 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.841539 kubelet[3316]: E1124 00:08:17.841324 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.846336 kubelet[3316]: E1124 00:08:17.846215 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.847108 kubelet[3316]: W1124 00:08:17.846438 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.847108 kubelet[3316]: E1124 00:08:17.846487 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.847816 kubelet[3316]: E1124 00:08:17.847773 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.848075 kubelet[3316]: W1124 00:08:17.848057 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.848325 kubelet[3316]: E1124 00:08:17.848181 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:17.863233 kubelet[3316]: E1124 00:08:17.863191 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:17.864833 kubelet[3316]: W1124 00:08:17.864713 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:17.864833 kubelet[3316]: E1124 00:08:17.864763 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:17.884414 containerd[1972]: time="2025-11-24T00:08:17.884350295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-659f7dd8c5-6tcn4,Uid:888a7466-9a34-485b-9c69-ab65509ab1cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"72bd85101202d85495d67c59e949cd5f633daee2c78d0a5e707ed800ac9d3d65\"" Nov 24 00:08:17.891762 containerd[1972]: time="2025-11-24T00:08:17.891715343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:08:17.895596 containerd[1972]: time="2025-11-24T00:08:17.895460626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hktjj,Uid:d28df994-100d-4c16-a65b-d89fe73d9eed,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\"" Nov 24 00:08:19.237582 kubelet[3316]: E1124 00:08:19.237521 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:19.387076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853339388.mount: Deactivated successfully. Nov 24 00:08:20.398400 containerd[1972]: time="2025-11-24T00:08:20.398334803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:20.399621 containerd[1972]: time="2025-11-24T00:08:20.399252815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:08:20.401591 containerd[1972]: time="2025-11-24T00:08:20.400894550Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:20.403474 containerd[1972]: time="2025-11-24T00:08:20.403429399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:20.404394 containerd[1972]: time="2025-11-24T00:08:20.404356925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.512570407s" Nov 24 00:08:20.404394 containerd[1972]: time="2025-11-24T00:08:20.404395363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:08:20.406235 containerd[1972]: time="2025-11-24T00:08:20.406190701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:08:20.436830 containerd[1972]: time="2025-11-24T00:08:20.436779974Z" level=info msg="CreateContainer within sandbox \"72bd85101202d85495d67c59e949cd5f633daee2c78d0a5e707ed800ac9d3d65\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:08:20.448585 containerd[1972]: time="2025-11-24T00:08:20.446807399Z" level=info msg="Container 
d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:20.455204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121267930.mount: Deactivated successfully. Nov 24 00:08:20.459717 containerd[1972]: time="2025-11-24T00:08:20.459672821Z" level=info msg="CreateContainer within sandbox \"72bd85101202d85495d67c59e949cd5f633daee2c78d0a5e707ed800ac9d3d65\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a\"" Nov 24 00:08:20.460854 containerd[1972]: time="2025-11-24T00:08:20.460772222Z" level=info msg="StartContainer for \"d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a\"" Nov 24 00:08:20.464102 containerd[1972]: time="2025-11-24T00:08:20.463986733Z" level=info msg="connecting to shim d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a" address="unix:///run/containerd/s/d5fedda7a6086b86f8a3bfccdf89ce1d808511816e7e75bb42d927e81f91e469" protocol=ttrpc version=3 Nov 24 00:08:20.541854 systemd[1]: Started cri-containerd-d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a.scope - libcontainer container d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a. Nov 24 00:08:20.642327 containerd[1972]: time="2025-11-24T00:08:20.642271875Z" level=info msg="StartContainer for \"d46db05263f33b1ff2768daaff83f22ad7f2eec4a464dc4e3cbb09760598057a\" returns successfully" Nov 24 00:08:21.236701 kubelet[3316]: E1124 00:08:21.236638 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:21.629982 kubelet[3316]: I1124 00:08:21.629902 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-659f7dd8c5-6tcn4" podStartSLOduration=2.113174192 podStartE2EDuration="4.62986487s" podCreationTimestamp="2025-11-24 00:08:17 +0000 UTC" firstStartedPulling="2025-11-24 00:08:17.889155609 +0000 UTC m=+25.879178239" lastFinishedPulling="2025-11-24 00:08:20.405846304 +0000 UTC m=+28.395868917" observedRunningTime="2025-11-24 00:08:21.62928588 +0000 UTC m=+29.619308534" watchObservedRunningTime="2025-11-24 00:08:21.62986487 +0000 UTC m=+29.619887505" Nov 24 00:08:21.669775 kubelet[3316]: E1124 00:08:21.669521 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.669775 kubelet[3316]: W1124 00:08:21.669585 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.669775 kubelet[3316]: E1124 00:08:21.669618 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.670989 kubelet[3316]: E1124 00:08:21.670673 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.670989 kubelet[3316]: W1124 00:08:21.670695 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.670989 kubelet[3316]: E1124 00:08:21.670718 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.671521 kubelet[3316]: E1124 00:08:21.671490 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.671855 kubelet[3316]: W1124 00:08:21.671702 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.671855 kubelet[3316]: E1124 00:08:21.671733 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.672915 kubelet[3316]: E1124 00:08:21.672805 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.672915 kubelet[3316]: W1124 00:08:21.672820 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.672915 kubelet[3316]: E1124 00:08:21.672837 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.674289 kubelet[3316]: E1124 00:08:21.673522 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.674289 kubelet[3316]: W1124 00:08:21.673538 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.674620 kubelet[3316]: E1124 00:08:21.673555 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.675006 kubelet[3316]: E1124 00:08:21.674722 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.675006 kubelet[3316]: W1124 00:08:21.674743 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.675006 kubelet[3316]: E1124 00:08:21.674759 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.675584 kubelet[3316]: E1124 00:08:21.675366 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.675584 kubelet[3316]: W1124 00:08:21.675380 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.675584 kubelet[3316]: E1124 00:08:21.675394 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.676625 kubelet[3316]: E1124 00:08:21.675966 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.676625 kubelet[3316]: W1124 00:08:21.675981 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.676625 kubelet[3316]: E1124 00:08:21.675997 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.677159 kubelet[3316]: E1124 00:08:21.676950 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.677159 kubelet[3316]: W1124 00:08:21.676964 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.677159 kubelet[3316]: E1124 00:08:21.676976 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.678943 kubelet[3316]: E1124 00:08:21.678892 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.679265 kubelet[3316]: W1124 00:08:21.679075 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.679265 kubelet[3316]: E1124 00:08:21.679099 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.679911 kubelet[3316]: E1124 00:08:21.679896 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.680233 kubelet[3316]: W1124 00:08:21.680105 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.680233 kubelet[3316]: E1124 00:08:21.680125 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.680739 kubelet[3316]: E1124 00:08:21.680717 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.681046 kubelet[3316]: W1124 00:08:21.680896 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.681046 kubelet[3316]: E1124 00:08:21.680913 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.681492 kubelet[3316]: E1124 00:08:21.681445 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.681846 kubelet[3316]: W1124 00:08:21.681600 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.681846 kubelet[3316]: E1124 00:08:21.681616 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.682342 kubelet[3316]: E1124 00:08:21.682077 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.682342 kubelet[3316]: W1124 00:08:21.682092 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.682342 kubelet[3316]: E1124 00:08:21.682106 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.683130 kubelet[3316]: E1124 00:08:21.682902 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.683130 kubelet[3316]: W1124 00:08:21.682916 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.683328 kubelet[3316]: E1124 00:08:21.683252 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.765993 kubelet[3316]: E1124 00:08:21.765954 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.766341 kubelet[3316]: W1124 00:08:21.766083 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.766341 kubelet[3316]: E1124 00:08:21.766133 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.768502 kubelet[3316]: E1124 00:08:21.768442 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.768502 kubelet[3316]: W1124 00:08:21.768466 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.768838 kubelet[3316]: E1124 00:08:21.768595 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.769511 kubelet[3316]: E1124 00:08:21.769443 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.769511 kubelet[3316]: W1124 00:08:21.769467 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.770197 kubelet[3316]: E1124 00:08:21.769485 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.771808 kubelet[3316]: E1124 00:08:21.771738 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.772317 kubelet[3316]: W1124 00:08:21.772139 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.772317 kubelet[3316]: E1124 00:08:21.772163 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.773313 kubelet[3316]: E1124 00:08:21.773271 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.773313 kubelet[3316]: W1124 00:08:21.773288 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.774626 kubelet[3316]: E1124 00:08:21.773782 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.775367 kubelet[3316]: E1124 00:08:21.775297 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.775367 kubelet[3316]: W1124 00:08:21.775331 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.775367 kubelet[3316]: E1124 00:08:21.775349 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.776635 kubelet[3316]: E1124 00:08:21.776619 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.777595 kubelet[3316]: W1124 00:08:21.776772 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.777595 kubelet[3316]: E1124 00:08:21.776798 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.778041 kubelet[3316]: E1124 00:08:21.778027 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.778220 kubelet[3316]: W1124 00:08:21.778121 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.778220 kubelet[3316]: E1124 00:08:21.778142 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.778691 kubelet[3316]: E1124 00:08:21.778598 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.778691 kubelet[3316]: W1124 00:08:21.778612 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.778691 kubelet[3316]: E1124 00:08:21.778637 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.780037 kubelet[3316]: E1124 00:08:21.779993 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.780037 kubelet[3316]: W1124 00:08:21.780008 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.780037 kubelet[3316]: E1124 00:08:21.780022 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.781813 kubelet[3316]: E1124 00:08:21.781746 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.781813 kubelet[3316]: W1124 00:08:21.781779 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.781813 kubelet[3316]: E1124 00:08:21.781796 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.782975 kubelet[3316]: E1124 00:08:21.782781 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.782975 kubelet[3316]: W1124 00:08:21.782841 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.782975 kubelet[3316]: E1124 00:08:21.782858 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.785137 kubelet[3316]: E1124 00:08:21.785063 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.785137 kubelet[3316]: W1124 00:08:21.785083 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.785137 kubelet[3316]: E1124 00:08:21.785099 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.786743 kubelet[3316]: E1124 00:08:21.786590 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.786743 kubelet[3316]: W1124 00:08:21.786619 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.787144 kubelet[3316]: E1124 00:08:21.786637 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.787905 kubelet[3316]: E1124 00:08:21.787887 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.788072 kubelet[3316]: W1124 00:08:21.788001 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.788072 kubelet[3316]: E1124 00:08:21.788023 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.788878 kubelet[3316]: E1124 00:08:21.788864 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.789235 kubelet[3316]: W1124 00:08:21.789044 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.789235 kubelet[3316]: E1124 00:08:21.789065 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:08:21.790418 kubelet[3316]: E1124 00:08:21.789827 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.790418 kubelet[3316]: W1124 00:08:21.789925 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.790418 kubelet[3316]: E1124 00:08:21.789953 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.791106 kubelet[3316]: E1124 00:08:21.791093 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:08:21.791428 kubelet[3316]: W1124 00:08:21.791261 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:08:21.791428 kubelet[3316]: E1124 00:08:21.791282 3316 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:08:21.868166 containerd[1972]: time="2025-11-24T00:08:21.868110102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:21.870689 containerd[1972]: time="2025-11-24T00:08:21.870426596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:08:21.872777 containerd[1972]: time="2025-11-24T00:08:21.872668551Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:21.879640 containerd[1972]: time="2025-11-24T00:08:21.879146826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:21.882518 containerd[1972]: time="2025-11-24T00:08:21.882270458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.476040118s" Nov 24 00:08:21.882518 containerd[1972]: time="2025-11-24T00:08:21.882319057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:08:21.892439 containerd[1972]: time="2025-11-24T00:08:21.892391554Z" level=info msg="CreateContainer within sandbox \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:08:21.914925 containerd[1972]: time="2025-11-24T00:08:21.914869702Z" level=info msg="Container 35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b: 
CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:21.926525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841825881.mount: Deactivated successfully. Nov 24 00:08:21.943732 containerd[1972]: time="2025-11-24T00:08:21.942458580Z" level=info msg="CreateContainer within sandbox \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b\"" Nov 24 00:08:21.944663 containerd[1972]: time="2025-11-24T00:08:21.944624133Z" level=info msg="StartContainer for \"35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b\"" Nov 24 00:08:21.949223 containerd[1972]: time="2025-11-24T00:08:21.949175159Z" level=info msg="connecting to shim 35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b" address="unix:///run/containerd/s/6ddd46d3aa177c3c923267e1b077989fa232368568831d08f2f37dcb4a5ac9e3" protocol=ttrpc version=3 Nov 24 00:08:21.984855 systemd[1]: Started cri-containerd-35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b.scope - libcontainer container 35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b. Nov 24 00:08:22.077996 containerd[1972]: time="2025-11-24T00:08:22.077916033Z" level=info msg="StartContainer for \"35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b\" returns successfully" Nov 24 00:08:22.098298 systemd[1]: cri-containerd-35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b.scope: Deactivated successfully. Nov 24 00:08:22.131443 containerd[1972]: time="2025-11-24T00:08:22.131268204Z" level=info msg="received container exit event container_id:\"35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b\" id:\"35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b\" pid:4276 exited_at:{seconds:1763942902 nanos:105109503}" Nov 24 00:08:22.165681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35cceb1aae6bbffa2f6ac9c2991b4a3e07df11608326eb33b48ffc7aea53e85b-rootfs.mount: Deactivated successfully. 
Nov 24 00:08:22.614960 containerd[1972]: time="2025-11-24T00:08:22.614897226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:08:23.236661 kubelet[3316]: E1124 00:08:23.236505 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:25.237041 kubelet[3316]: E1124 00:08:25.236974 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:26.698101 containerd[1972]: time="2025-11-24T00:08:26.698019946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:26.713123 containerd[1972]: time="2025-11-24T00:08:26.713016263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:08:26.716895 containerd[1972]: time="2025-11-24T00:08:26.716427764Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:26.720537 containerd[1972]: time="2025-11-24T00:08:26.720481999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:26.721582 containerd[1972]: time="2025-11-24T00:08:26.721526451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.106560853s" Nov 24 00:08:26.721743 containerd[1972]: time="2025-11-24T00:08:26.721722037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:08:26.729297 containerd[1972]: time="2025-11-24T00:08:26.729216214Z" level=info msg="CreateContainer within sandbox \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:08:26.746967 containerd[1972]: time="2025-11-24T00:08:26.746919496Z" level=info msg="Container 5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:26.752980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429551473.mount: Deactivated successfully. 
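
[Editor's note] The two image pulls logged so far report "bytes read" and a pull duration, which gives a rough transfer-rate estimate. "Bytes read" is containerd's count of compressed bytes fetched, so the figures below approximate wire throughput, not unpack speed; the numbers are copied from the log lines.

```go
// pullrate.go - rough throughput estimate for the two pulls in this log.
package main

import "fmt"

func main() {
	pulls := []struct {
		name    string
		bytes   float64 // "bytes read" from the log
		seconds float64 // duration from the "Pulled image ... in" line
	}{
		{"pod2daemon-flexvol:v3.30.4", 4446754, 1.476040118},
		{"cni:v3.30.4", 70446859, 4.106560853},
	}
	for _, p := range pulls {
		fmt.Printf("%-30s ~%.1f MiB/s\n", p.name, p.bytes/p.seconds/(1<<20))
	}
}
```
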
Nov 24 00:08:26.766762 containerd[1972]: time="2025-11-24T00:08:26.766706029Z" level=info msg="CreateContainer within sandbox \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be\"" Nov 24 00:08:26.768334 containerd[1972]: time="2025-11-24T00:08:26.768262975Z" level=info msg="StartContainer for \"5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be\"" Nov 24 00:08:26.770746 containerd[1972]: time="2025-11-24T00:08:26.770695001Z" level=info msg="connecting to shim 5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be" address="unix:///run/containerd/s/6ddd46d3aa177c3c923267e1b077989fa232368568831d08f2f37dcb4a5ac9e3" protocol=ttrpc version=3 Nov 24 00:08:26.848075 systemd[1]: Started cri-containerd-5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be.scope - libcontainer container 5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be. Nov 24 00:08:27.074998 containerd[1972]: time="2025-11-24T00:08:27.074873503Z" level=info msg="StartContainer for \"5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be\" returns successfully" Nov 24 00:08:27.236703 kubelet[3316]: E1124 00:08:27.236638 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:27.890970 systemd[1]: cri-containerd-5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be.scope: Deactivated successfully. Nov 24 00:08:27.891721 systemd[1]: cri-containerd-5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be.scope: Consumed 702ms CPU time, 163M memory peak, 7.1M read from disk, 171.3M written to disk. Nov 24 00:08:27.895376 containerd[1972]: time="2025-11-24T00:08:27.895336981Z" level=info msg="received container exit event container_id:\"5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be\" id:\"5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be\" pid:4338 exited_at:{seconds:1763942907 nanos:890558371}" Nov 24 00:08:27.951014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a97629fbbae8bff58e149e97baaa8c345755f8108394f35f9e4c21ae450b8be-rootfs.mount: Deactivated successfully. Nov 24 00:08:27.986389 kubelet[3316]: I1124 00:08:27.986362 3316 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:08:28.045940 systemd[1]: Created slice kubepods-burstable-pod348cf778_5f5c_4d14_8753_45e0fb5f1d98.slice - libcontainer container kubepods-burstable-pod348cf778_5f5c_4d14_8753_45e0fb5f1d98.slice. Nov 24 00:08:28.091120 systemd[1]: Created slice kubepods-besteffort-pod1ec8e9c1_4321_4965_b5fd_6e54d9442fa1.slice - libcontainer container kubepods-besteffort-pod1ec8e9c1_4321_4965_b5fd_6e54d9442fa1.slice. Nov 24 00:08:28.109213 systemd[1]: Created slice kubepods-burstable-pod1987144d_d184_44c0_92fb_e90e141fbcf8.slice - libcontainer container kubepods-burstable-pod1987144d_d184_44c0_92fb_e90e141fbcf8.slice. Nov 24 00:08:28.126592 systemd[1]: Created slice kubepods-besteffort-podfe389aaa_291c_4fa0_a06f_e4820906cbf6.slice - libcontainer container kubepods-besteffort-podfe389aaa_291c_4fa0_a06f_e4820906cbf6.slice. 
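
[Editor's note] The "Created slice kubepods-..." lines above show the kubelet's systemd cgroup driver naming pod cgroups from the QoS class and pod UID. The helper below reconstructs that naming as it appears in this log (dashes in the UID become underscores, since a dash is the hierarchy separator in slice names); it is a sketch of the visible convention, not kubelet code.

```go
// podslice.go - derives the systemd slice name for a pod as seen in the
// "Created slice kubepods-<qos>-pod<uid>.slice" log lines.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "348cf778-5f5c-4d14-8753-45e0fb5f1d98"))
	// kubepods-burstable-pod348cf778_5f5c_4d14_8753_45e0fb5f1d98.slice
	fmt.Println(podSlice("besteffort", "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1"))
	// kubepods-besteffort-pod1ec8e9c1_4321_4965_b5fd_6e54d9442fa1.slice
}
```
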
Nov 24 00:08:28.132482 kubelet[3316]: I1124 00:08:28.132438 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5n4\" (UniqueName: \"kubernetes.io/projected/348cf778-5f5c-4d14-8753-45e0fb5f1d98-kube-api-access-6z5n4\") pod \"coredns-674b8bbfcf-bqwch\" (UID: \"348cf778-5f5c-4d14-8753-45e0fb5f1d98\") " pod="kube-system/coredns-674b8bbfcf-bqwch" Nov 24 00:08:28.132703 kubelet[3316]: I1124 00:08:28.132490 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/348cf778-5f5c-4d14-8753-45e0fb5f1d98-config-volume\") pod \"coredns-674b8bbfcf-bqwch\" (UID: \"348cf778-5f5c-4d14-8753-45e0fb5f1d98\") " pod="kube-system/coredns-674b8bbfcf-bqwch" Nov 24 00:08:28.140320 systemd[1]: Created slice kubepods-besteffort-pod18d54b97_5424_4119_892c_ebd148db0571.slice - libcontainer container kubepods-besteffort-pod18d54b97_5424_4119_892c_ebd148db0571.slice. Nov 24 00:08:28.162384 systemd[1]: Created slice kubepods-besteffort-pod63a82b4c_a5db_46d5_9bde_8b4be9966835.slice - libcontainer container kubepods-besteffort-pod63a82b4c_a5db_46d5_9bde_8b4be9966835.slice. Nov 24 00:08:28.173053 systemd[1]: Created slice kubepods-besteffort-pod293f9213_9ce6_465e_8d91_13e61a8f35a0.slice - libcontainer container kubepods-besteffort-pod293f9213_9ce6_465e_8d91_13e61a8f35a0.slice. Nov 24 00:08:28.232797 kubelet[3316]: I1124 00:08:28.232734 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d54b97-5424-4119-892c-ebd148db0571-goldmane-ca-bundle\") pod \"goldmane-666569f655-jqnrx\" (UID: \"18d54b97-5424-4119-892c-ebd148db0571\") " pod="calico-system/goldmane-666569f655-jqnrx" Nov 24 00:08:28.233121 kubelet[3316]: I1124 00:08:28.232804 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9pk\" (UniqueName: \"kubernetes.io/projected/fe389aaa-291c-4fa0-a06f-e4820906cbf6-kube-api-access-5n9pk\") pod \"calico-apiserver-7cc86c6ddc-jtm28\" (UID: \"fe389aaa-291c-4fa0-a06f-e4820906cbf6\") " pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" Nov 24 00:08:28.233121 kubelet[3316]: I1124 00:08:28.232848 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63a82b4c-a5db-46d5-9bde-8b4be9966835-calico-apiserver-certs\") pod \"calico-apiserver-7cc86c6ddc-jzjm6\" (UID: \"63a82b4c-a5db-46d5-9bde-8b4be9966835\") " pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" Nov 24 00:08:28.233121 kubelet[3316]: I1124 00:08:28.232873 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx9h5\" (UniqueName: \"kubernetes.io/projected/63a82b4c-a5db-46d5-9bde-8b4be9966835-kube-api-access-cx9h5\") pod \"calico-apiserver-7cc86c6ddc-jzjm6\" (UID: \"63a82b4c-a5db-46d5-9bde-8b4be9966835\") " pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" Nov 24 00:08:28.233121 kubelet[3316]: I1124 00:08:28.232899 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-ca-bundle\") pod \"whisker-9f77cb448-9qm4m\" (UID: \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\") " pod="calico-system/whisker-9f77cb448-9qm4m" 
Nov 24 00:08:28.233121 kubelet[3316]: I1124 00:08:28.232922 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbg4p\" (UniqueName: \"kubernetes.io/projected/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-kube-api-access-gbg4p\") pod \"whisker-9f77cb448-9qm4m\" (UID: \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\") " pod="calico-system/whisker-9f77cb448-9qm4m" Nov 24 00:08:28.233792 kubelet[3316]: I1124 00:08:28.232947 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqp8j\" (UniqueName: \"kubernetes.io/projected/293f9213-9ce6-465e-8d91-13e61a8f35a0-kube-api-access-hqp8j\") pod \"calico-kube-controllers-857d84d84d-ncvx2\" (UID: \"293f9213-9ce6-465e-8d91-13e61a8f35a0\") " pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" Nov 24 00:08:28.233792 kubelet[3316]: I1124 00:08:28.233000 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe389aaa-291c-4fa0-a06f-e4820906cbf6-calico-apiserver-certs\") pod \"calico-apiserver-7cc86c6ddc-jtm28\" (UID: \"fe389aaa-291c-4fa0-a06f-e4820906cbf6\") " pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" Nov 24 00:08:28.233792 kubelet[3316]: I1124 00:08:28.233026 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9s7g\" (UniqueName: \"kubernetes.io/projected/18d54b97-5424-4119-892c-ebd148db0571-kube-api-access-x9s7g\") pod \"goldmane-666569f655-jqnrx\" (UID: \"18d54b97-5424-4119-892c-ebd148db0571\") " pod="calico-system/goldmane-666569f655-jqnrx" Nov 24 00:08:28.233792 kubelet[3316]: I1124 00:08:28.233054 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1987144d-d184-44c0-92fb-e90e141fbcf8-config-volume\") pod \"coredns-674b8bbfcf-624q9\" (UID: \"1987144d-d184-44c0-92fb-e90e141fbcf8\") " pod="kube-system/coredns-674b8bbfcf-624q9" Nov 24 00:08:28.233792 kubelet[3316]: I1124 00:08:28.233079 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwf8k\" (UniqueName: \"kubernetes.io/projected/1987144d-d184-44c0-92fb-e90e141fbcf8-kube-api-access-xwf8k\") pod \"coredns-674b8bbfcf-624q9\" (UID: \"1987144d-d184-44c0-92fb-e90e141fbcf8\") " pod="kube-system/coredns-674b8bbfcf-624q9" Nov 24 00:08:28.234481 kubelet[3316]: I1124 00:08:28.233103 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18d54b97-5424-4119-892c-ebd148db0571-config\") pod \"goldmane-666569f655-jqnrx\" (UID: \"18d54b97-5424-4119-892c-ebd148db0571\") " pod="calico-system/goldmane-666569f655-jqnrx" Nov 24 00:08:28.234481 kubelet[3316]: I1124 00:08:28.233128 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/18d54b97-5424-4119-892c-ebd148db0571-goldmane-key-pair\") pod \"goldmane-666569f655-jqnrx\" (UID: \"18d54b97-5424-4119-892c-ebd148db0571\") " pod="calico-system/goldmane-666569f655-jqnrx" Nov 24 00:08:28.234481 kubelet[3316]: I1124 00:08:28.233169 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-backend-key-pair\") pod \"whisker-9f77cb448-9qm4m\" (UID: \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\") " pod="calico-system/whisker-9f77cb448-9qm4m" Nov 24 00:08:28.234481 kubelet[3316]: I1124 00:08:28.233195 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/293f9213-9ce6-465e-8d91-13e61a8f35a0-tigera-ca-bundle\") pod \"calico-kube-controllers-857d84d84d-ncvx2\" (UID: \"293f9213-9ce6-465e-8d91-13e61a8f35a0\") " pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" Nov 24 00:08:28.372004 containerd[1972]: time="2025-11-24T00:08:28.371937885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bqwch,Uid:348cf778-5f5c-4d14-8753-45e0fb5f1d98,Namespace:kube-system,Attempt:0,}" Nov 24 00:08:28.448539 containerd[1972]: time="2025-11-24T00:08:28.447861648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jtm28,Uid:fe389aaa-291c-4fa0-a06f-e4820906cbf6,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:08:28.458682 containerd[1972]: time="2025-11-24T00:08:28.458619403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jqnrx,Uid:18d54b97-5424-4119-892c-ebd148db0571,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:28.479147 containerd[1972]: time="2025-11-24T00:08:28.479093932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jzjm6,Uid:63a82b4c-a5db-46d5-9bde-8b4be9966835,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:08:28.480496 containerd[1972]: time="2025-11-24T00:08:28.480126717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857d84d84d-ncvx2,Uid:293f9213-9ce6-465e-8d91-13e61a8f35a0,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:28.710910 containerd[1972]: time="2025-11-24T00:08:28.710675165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9f77cb448-9qm4m,Uid:1ec8e9c1-4321-4965-b5fd-6e54d9442fa1,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:28.717115 containerd[1972]: time="2025-11-24T00:08:28.717066490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:08:28.725973 containerd[1972]: time="2025-11-24T00:08:28.725937471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-624q9,Uid:1987144d-d184-44c0-92fb-e90e141fbcf8,Namespace:kube-system,Attempt:0,}" Nov 24 00:08:28.877411 containerd[1972]: time="2025-11-24T00:08:28.877336215Z" level=error msg="Failed to destroy network for sandbox \"b5c9b34f29b231d48d3590bbfcfd89ebedcbb1f3d4cbefbd2146cfd010a63eec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.882349 containerd[1972]: time="2025-11-24T00:08:28.882261351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jzjm6,Uid:63a82b4c-a5db-46d5-9bde-8b4be9966835,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c9b34f29b231d48d3590bbfcfd89ebedcbb1f3d4cbefbd2146cfd010a63eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.883220 kubelet[3316]: E1124 
00:08:28.883165 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c9b34f29b231d48d3590bbfcfd89ebedcbb1f3d4cbefbd2146cfd010a63eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.884432 kubelet[3316]: E1124 00:08:28.883654 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c9b34f29b231d48d3590bbfcfd89ebedcbb1f3d4cbefbd2146cfd010a63eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" Nov 24 00:08:28.884432 kubelet[3316]: E1124 00:08:28.883705 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5c9b34f29b231d48d3590bbfcfd89ebedcbb1f3d4cbefbd2146cfd010a63eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" Nov 24 00:08:28.887623 kubelet[3316]: E1124 00:08:28.887491 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc86c6ddc-jzjm6_calico-apiserver(63a82b4c-a5db-46d5-9bde-8b4be9966835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc86c6ddc-jzjm6_calico-apiserver(63a82b4c-a5db-46d5-9bde-8b4be9966835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5c9b34f29b231d48d3590bbfcfd89ebedcbb1f3d4cbefbd2146cfd010a63eec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:08:28.894595 containerd[1972]: time="2025-11-24T00:08:28.894471888Z" level=error msg="Failed to destroy network for sandbox \"4ec66f712da6efbc12cca9a7f651605037100d7818ff66395ea34cedd110480d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.896582 containerd[1972]: time="2025-11-24T00:08:28.895958052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jqnrx,Uid:18d54b97-5424-4119-892c-ebd148db0571,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec66f712da6efbc12cca9a7f651605037100d7818ff66395ea34cedd110480d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.914528 containerd[1972]: time="2025-11-24T00:08:28.913305136Z" level=error msg="Failed to destroy network for sandbox \"8e44418a71a5ca9fd0c5d8804b481b45a360b02f7382fc0ca578d678f248a10c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.914807 kubelet[3316]: E1124 00:08:28.914214 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec66f712da6efbc12cca9a7f651605037100d7818ff66395ea34cedd110480d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.914807 kubelet[3316]: E1124 00:08:28.914311 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec66f712da6efbc12cca9a7f651605037100d7818ff66395ea34cedd110480d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jqnrx" Nov 24 00:08:28.914807 kubelet[3316]: E1124 00:08:28.914355 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec66f712da6efbc12cca9a7f651605037100d7818ff66395ea34cedd110480d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jqnrx" Nov 24 00:08:28.914967 kubelet[3316]: E1124 00:08:28.914451 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jqnrx_calico-system(18d54b97-5424-4119-892c-ebd148db0571)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jqnrx_calico-system(18d54b97-5424-4119-892c-ebd148db0571)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ec66f712da6efbc12cca9a7f651605037100d7818ff66395ea34cedd110480d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:08:28.917460 containerd[1972]: time="2025-11-24T00:08:28.916940374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857d84d84d-ncvx2,Uid:293f9213-9ce6-465e-8d91-13e61a8f35a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e44418a71a5ca9fd0c5d8804b481b45a360b02f7382fc0ca578d678f248a10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.917460 containerd[1972]: time="2025-11-24T00:08:28.917187299Z" level=error msg="Failed to destroy network for sandbox \"2537aa71ea5e3cc95411c6f725697c6ca8078c24dc8dab5e91fa4161510d20e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.919040 containerd[1972]: time="2025-11-24T00:08:28.918924813Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jtm28,Uid:fe389aaa-291c-4fa0-a06f-e4820906cbf6,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2537aa71ea5e3cc95411c6f725697c6ca8078c24dc8dab5e91fa4161510d20e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.919314 kubelet[3316]: E1124 00:08:28.919221 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2537aa71ea5e3cc95411c6f725697c6ca8078c24dc8dab5e91fa4161510d20e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.919314 kubelet[3316]: E1124 00:08:28.919292 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2537aa71ea5e3cc95411c6f725697c6ca8078c24dc8dab5e91fa4161510d20e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" Nov 24 00:08:28.919610 kubelet[3316]: E1124 00:08:28.919328 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2537aa71ea5e3cc95411c6f725697c6ca8078c24dc8dab5e91fa4161510d20e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" Nov 24 00:08:28.919610 kubelet[3316]: E1124 00:08:28.919408 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cc86c6ddc-jtm28_calico-apiserver(fe389aaa-291c-4fa0-a06f-e4820906cbf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cc86c6ddc-jtm28_calico-apiserver(fe389aaa-291c-4fa0-a06f-e4820906cbf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2537aa71ea5e3cc95411c6f725697c6ca8078c24dc8dab5e91fa4161510d20e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:08:28.919610 kubelet[3316]: E1124 00:08:28.919474 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e44418a71a5ca9fd0c5d8804b481b45a360b02f7382fc0ca578d678f248a10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.919916 kubelet[3316]: E1124 00:08:28.919504 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e44418a71a5ca9fd0c5d8804b481b45a360b02f7382fc0ca578d678f248a10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" Nov 24 00:08:28.919916 kubelet[3316]: E1124 00:08:28.919522 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e44418a71a5ca9fd0c5d8804b481b45a360b02f7382fc0ca578d678f248a10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" Nov 24 00:08:28.919916 kubelet[3316]: E1124 00:08:28.919587 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-857d84d84d-ncvx2_calico-system(293f9213-9ce6-465e-8d91-13e61a8f35a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-857d84d84d-ncvx2_calico-system(293f9213-9ce6-465e-8d91-13e61a8f35a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e44418a71a5ca9fd0c5d8804b481b45a360b02f7382fc0ca578d678f248a10c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:08:28.932148 containerd[1972]: time="2025-11-24T00:08:28.932089942Z" level=error msg="Failed to destroy network for sandbox \"26ff505ed446445dcb7fd8cb6b328aa8f64eec61f2e0f7c18f7a949cf90562f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.934026 containerd[1972]: time="2025-11-24T00:08:28.933962969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bqwch,Uid:348cf778-5f5c-4d14-8753-45e0fb5f1d98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ff505ed446445dcb7fd8cb6b328aa8f64eec61f2e0f7c18f7a949cf90562f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.937040 kubelet[3316]: E1124 00:08:28.936925 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ff505ed446445dcb7fd8cb6b328aa8f64eec61f2e0f7c18f7a949cf90562f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:28.937211 kubelet[3316]: E1124 00:08:28.937140 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ff505ed446445dcb7fd8cb6b328aa8f64eec61f2e0f7c18f7a949cf90562f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bqwch" Nov 24 00:08:28.937211 kubelet[3316]: E1124 00:08:28.937176 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"26ff505ed446445dcb7fd8cb6b328aa8f64eec61f2e0f7c18f7a949cf90562f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bqwch" Nov 24 00:08:28.938151 kubelet[3316]: E1124 00:08:28.938079 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bqwch_kube-system(348cf778-5f5c-4d14-8753-45e0fb5f1d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bqwch_kube-system(348cf778-5f5c-4d14-8753-45e0fb5f1d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26ff505ed446445dcb7fd8cb6b328aa8f64eec61f2e0f7c18f7a949cf90562f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bqwch" podUID="348cf778-5f5c-4d14-8753-45e0fb5f1d98" Nov 24 00:08:29.003921 containerd[1972]: time="2025-11-24T00:08:29.000717913Z" level=error msg="Failed to destroy network for sandbox \"8013c3ce857b098e850e3bf4f85ec787b9ac7e48f9c414a8be0688a7ce01f49e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.005435 systemd[1]: run-netns-cni\x2de2d45377\x2d8cdc\x2daccf\x2dbe2e\x2d084f25905c95.mount: Deactivated successfully. Nov 24 00:08:29.009677 containerd[1972]: time="2025-11-24T00:08:29.009288445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9f77cb448-9qm4m,Uid:1ec8e9c1-4321-4965-b5fd-6e54d9442fa1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8013c3ce857b098e850e3bf4f85ec787b9ac7e48f9c414a8be0688a7ce01f49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.011643 kubelet[3316]: E1124 00:08:29.011108 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8013c3ce857b098e850e3bf4f85ec787b9ac7e48f9c414a8be0688a7ce01f49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.011777 kubelet[3316]: E1124 00:08:29.011677 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8013c3ce857b098e850e3bf4f85ec787b9ac7e48f9c414a8be0688a7ce01f49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9f77cb448-9qm4m" Nov 24 00:08:29.011777 kubelet[3316]: E1124 00:08:29.011710 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8013c3ce857b098e850e3bf4f85ec787b9ac7e48f9c414a8be0688a7ce01f49e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-9f77cb448-9qm4m" Nov 24 00:08:29.012240 kubelet[3316]: E1124 00:08:29.011818 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9f77cb448-9qm4m_calico-system(1ec8e9c1-4321-4965-b5fd-6e54d9442fa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-9f77cb448-9qm4m_calico-system(1ec8e9c1-4321-4965-b5fd-6e54d9442fa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8013c3ce857b098e850e3bf4f85ec787b9ac7e48f9c414a8be0688a7ce01f49e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9f77cb448-9qm4m" podUID="1ec8e9c1-4321-4965-b5fd-6e54d9442fa1" Nov 24 00:08:29.016510 containerd[1972]: time="2025-11-24T00:08:29.016463200Z" level=error msg="Failed to destroy network for sandbox \"93c6d7fe491f72c4d742e7d5297d47c4ce744b200c3a21b1b6e4bae5e8d7a741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.022145 systemd[1]: run-netns-cni\x2d1486c15e\x2d7641\x2dd714\x2d1b71\x2d457aeb16a58c.mount: Deactivated successfully. Nov 24 00:08:29.023556 containerd[1972]: time="2025-11-24T00:08:29.022937038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-624q9,Uid:1987144d-d184-44c0-92fb-e90e141fbcf8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c6d7fe491f72c4d742e7d5297d47c4ce744b200c3a21b1b6e4bae5e8d7a741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.024059 kubelet[3316]: E1124 00:08:29.024006 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c6d7fe491f72c4d742e7d5297d47c4ce744b200c3a21b1b6e4bae5e8d7a741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.024165 kubelet[3316]: E1124 00:08:29.024090 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c6d7fe491f72c4d742e7d5297d47c4ce744b200c3a21b1b6e4bae5e8d7a741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-624q9" Nov 24 00:08:29.024165 kubelet[3316]: E1124 00:08:29.024122 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93c6d7fe491f72c4d742e7d5297d47c4ce744b200c3a21b1b6e4bae5e8d7a741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-624q9" Nov 24 00:08:29.024557 kubelet[3316]: E1124 00:08:29.024226 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-624q9_kube-system(1987144d-d184-44c0-92fb-e90e141fbcf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-624q9_kube-system(1987144d-d184-44c0-92fb-e90e141fbcf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93c6d7fe491f72c4d742e7d5297d47c4ce744b200c3a21b1b6e4bae5e8d7a741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-624q9" podUID="1987144d-d184-44c0-92fb-e90e141fbcf8" Nov 24 00:08:29.242990 systemd[1]: Created slice kubepods-besteffort-poda7f2741e_c2a8_4e97_9679_431279b978f1.slice - libcontainer container kubepods-besteffort-poda7f2741e_c2a8_4e97_9679_431279b978f1.slice. Nov 24 00:08:29.246475 containerd[1972]: time="2025-11-24T00:08:29.246424587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44qlh,Uid:a7f2741e-c2a8-4e97-9679-431279b978f1,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:29.317299 containerd[1972]: time="2025-11-24T00:08:29.316620767Z" level=error msg="Failed to destroy network for sandbox \"eab8b619d1cf3f144dac2173f9411076c47e22b09f1067d1498aca9cbcb224d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.318113 containerd[1972]: time="2025-11-24T00:08:29.318054977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44qlh,Uid:a7f2741e-c2a8-4e97-9679-431279b978f1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8b619d1cf3f144dac2173f9411076c47e22b09f1067d1498aca9cbcb224d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.319913 kubelet[3316]: E1124 00:08:29.319671 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8b619d1cf3f144dac2173f9411076c47e22b09f1067d1498aca9cbcb224d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:08:29.320044 kubelet[3316]: E1124 00:08:29.319972 3316 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8b619d1cf3f144dac2173f9411076c47e22b09f1067d1498aca9cbcb224d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:29.320105 kubelet[3316]: E1124 00:08:29.320042 3316 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab8b619d1cf3f144dac2173f9411076c47e22b09f1067d1498aca9cbcb224d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-44qlh" Nov 24 00:08:29.320624 kubelet[3316]: E1124 00:08:29.320143 3316 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eab8b619d1cf3f144dac2173f9411076c47e22b09f1067d1498aca9cbcb224d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:29.323642 systemd[1]: run-netns-cni\x2df25aa4ec\x2df632\x2db86a\x2d5660\x2d9fac03bf4abf.mount: Deactivated successfully. Nov 24 00:08:35.469221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108228088.mount: Deactivated successfully. Nov 24 00:08:35.556401 containerd[1972]: time="2025-11-24T00:08:35.556333400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:35.561193 containerd[1972]: time="2025-11-24T00:08:35.560106234Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:35.572531 containerd[1972]: time="2025-11-24T00:08:35.572103263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:08:35.572823 containerd[1972]: time="2025-11-24T00:08:35.572753174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:08:35.584269 containerd[1972]: time="2025-11-24T00:08:35.584199181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.855251884s" Nov 24 00:08:35.584650 containerd[1972]: time="2025-11-24T00:08:35.584487117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:08:35.651968 containerd[1972]: time="2025-11-24T00:08:35.651471950Z" level=info msg="CreateContainer within sandbox \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:08:35.735159 containerd[1972]: time="2025-11-24T00:08:35.734732002Z" level=info msg="Container 34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:35.737768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424162937.mount: Deactivated successfully. 
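
[Editor's note] Every sandbox failure above shares one root cause: the Calico CNI plugin cannot find /var/lib/calico/nodename, a file that calico/node writes only once it is running with /var/lib/calico mounted. The calico/node image finishing its pull in the lines just above is what eventually clears these errors. A minimal reproduction of the failing check, for illustration only (the real check is inside the Calico CNI plugin):

```go
// nodenamecheck.go - reproduces the check behind every
// "stat /var/lib/calico/nodename: no such file or directory" sandbox error above.
package main

import (
	"fmt"
	"os"
)

func nodename(path string) (string, error) {
	b, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		// calico/node creates this file after it starts; until then every
		// CNI ADD/DEL returns the message seen repeatedly in the log.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := nodename("/var/lib/calico/nodename")
	fmt.Println(name, err)
}
```
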
Nov 24 00:08:35.798176 containerd[1972]: time="2025-11-24T00:08:35.798116518Z" level=info msg="CreateContainer within sandbox \"c9cfe42132c84edad6590dace44f57327bbf82bce655ccf7752ed53527957e43\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d\"" Nov 24 00:08:35.798996 containerd[1972]: time="2025-11-24T00:08:35.798950086Z" level=info msg="StartContainer for \"34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d\"" Nov 24 00:08:35.811998 containerd[1972]: time="2025-11-24T00:08:35.811938700Z" level=info msg="connecting to shim 34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d" address="unix:///run/containerd/s/6ddd46d3aa177c3c923267e1b077989fa232368568831d08f2f37dcb4a5ac9e3" protocol=ttrpc version=3 Nov 24 00:08:36.005884 systemd[1]: Started cri-containerd-34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d.scope - libcontainer container 34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d. Nov 24 00:08:36.125261 containerd[1972]: time="2025-11-24T00:08:36.125203231Z" level=info msg="StartContainer for \"34de7bfaaf2f45e47e0e726a22281c41b6752832457bea2f6f21f163df947e5d\" returns successfully" Nov 24 00:08:36.314515 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 00:08:36.316760 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 24 00:08:37.030669 kubelet[3316]: I1124 00:08:37.030320 3316 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-backend-key-pair\") pod \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\" (UID: \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\") " Nov 24 00:08:37.030669 kubelet[3316]: I1124 00:08:37.030453 3316 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-ca-bundle\") pod \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\" (UID: \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\") " Nov 24 00:08:37.030669 kubelet[3316]: I1124 00:08:37.030482 3316 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbg4p\" (UniqueName: \"kubernetes.io/projected/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-kube-api-access-gbg4p\") pod \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\" (UID: \"1ec8e9c1-4321-4965-b5fd-6e54d9442fa1\") " Nov 24 00:08:37.043609 kubelet[3316]: I1124 00:08:37.039486 3316 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1" (UID: "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:08:37.053432 systemd[1]: var-lib-kubelet-pods-1ec8e9c1\x2d4321\x2d4965\x2db5fd\x2d6e54d9442fa1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbg4p.mount: Deactivated successfully. 
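
[Editor's note] The volume unmount lines above use systemd mount-unit names like var-lib-kubelet-pods-1ec8e9c1\x2d...\x7eprojected-kube\x2dapi\x2daccess\x2dgbg4p.mount. The sketch below covers only the escaping visible in this log ('/' becomes '-', literal '-' and '~' become \x2d and \x7e); real systemd path escaping (systemd-escape --path) handles additional characters.

```go
// unitescape.go - simplified version of the path escaping seen in the
// kubelet volume mount-unit names above.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range p {
		switch c {
		case '/':
			b.WriteByte('-')
		case '-':
			b.WriteString(`\x2d`)
		case '~':
			b.WriteString(`\x7e`)
		default:
			b.WriteRune(c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapePath("/var/lib/kubelet/pods/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1/volumes/kubernetes.io~projected/kube-api-access-gbg4p"))
	// matches the unit name in the log above
}
```
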
Nov 24 00:08:37.055549 kubelet[3316]: I1124 00:08:37.053914 3316 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-kube-api-access-gbg4p" (OuterVolumeSpecName: "kube-api-access-gbg4p") pod "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1" (UID: "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1"). InnerVolumeSpecName "kube-api-access-gbg4p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:08:37.056719 kubelet[3316]: I1124 00:08:37.056675 3316 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1" (UID: "1ec8e9c1-4321-4965-b5fd-6e54d9442fa1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:08:37.058988 systemd[1]: var-lib-kubelet-pods-1ec8e9c1\x2d4321\x2d4965\x2db5fd\x2d6e54d9442fa1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 00:08:37.132002 kubelet[3316]: I1124 00:08:37.131656 3316 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-ca-bundle\") on node \"ip-172-31-16-87\" DevicePath \"\"" Nov 24 00:08:37.132002 kubelet[3316]: I1124 00:08:37.131943 3316 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gbg4p\" (UniqueName: \"kubernetes.io/projected/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-kube-api-access-gbg4p\") on node \"ip-172-31-16-87\" DevicePath \"\"" Nov 24 00:08:37.132002 kubelet[3316]: I1124 00:08:37.131962 3316 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1-whisker-backend-key-pair\") on node \"ip-172-31-16-87\" DevicePath \"\"" Nov 24 00:08:37.762028 kubelet[3316]: I1124 00:08:37.761985 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:08:37.769774 systemd[1]: Removed slice kubepods-besteffort-pod1ec8e9c1_4321_4965_b5fd_6e54d9442fa1.slice - libcontainer container kubepods-besteffort-pod1ec8e9c1_4321_4965_b5fd_6e54d9442fa1.slice. Nov 24 00:08:37.795915 kubelet[3316]: I1124 00:08:37.793762 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hktjj" podStartSLOduration=3.10585 podStartE2EDuration="20.793737835s" podCreationTimestamp="2025-11-24 00:08:17 +0000 UTC" firstStartedPulling="2025-11-24 00:08:17.897827943 +0000 UTC m=+25.887850558" lastFinishedPulling="2025-11-24 00:08:35.58571576 +0000 UTC m=+43.575738393" observedRunningTime="2025-11-24 00:08:36.849473879 +0000 UTC m=+44.839496539" watchObservedRunningTime="2025-11-24 00:08:37.793737835 +0000 UTC m=+45.783760469" Nov 24 00:08:37.920833 systemd[1]: Created slice kubepods-besteffort-pod2e5b90ac_d808_4aaf_9a8a_1acb3e1260f1.slice - libcontainer container kubepods-besteffort-pod2e5b90ac_d808_4aaf_9a8a_1acb3e1260f1.slice. 
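
[Editor's note] The "Observed pod startup duration" line above reports podStartE2EDuration="20.793737835s" and podStartSLOduration=3.10585. As the figures in that line suggest, the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is the E2E duration minus the image-pull window (lastFinishedPulling - firstStartedPulling); the check below just redoes that arithmetic from the logged timestamps.

```go
// startup.go - verifies how the numbers in the startup-latency line relate.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-24 00:08:17 +0000 UTC")
	firstPull := mustParse("2025-11-24 00:08:17.897827943 +0000 UTC")
	lastPull := mustParse("2025-11-24 00:08:35.58571576 +0000 UTC")
	running := mustParse("2025-11-24 00:08:37.793737835 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo) // 20.793737835s 3.105850018s, matching the logged values
}
```
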
Nov 24 00:08:38.039018 kubelet[3316]: I1124 00:08:38.038755 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1-whisker-ca-bundle\") pod \"whisker-dfdd4b85d-wmqzw\" (UID: \"2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1\") " pod="calico-system/whisker-dfdd4b85d-wmqzw" Nov 24 00:08:38.039018 kubelet[3316]: I1124 00:08:38.038841 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1-whisker-backend-key-pair\") pod \"whisker-dfdd4b85d-wmqzw\" (UID: \"2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1\") " pod="calico-system/whisker-dfdd4b85d-wmqzw" Nov 24 00:08:38.039018 kubelet[3316]: I1124 00:08:38.038879 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68g7q\" (UniqueName: \"kubernetes.io/projected/2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1-kube-api-access-68g7q\") pod \"whisker-dfdd4b85d-wmqzw\" (UID: \"2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1\") " pod="calico-system/whisker-dfdd4b85d-wmqzw" Nov 24 00:08:38.228089 containerd[1972]: time="2025-11-24T00:08:38.227724683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfdd4b85d-wmqzw,Uid:2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:38.260552 kubelet[3316]: I1124 00:08:38.259914 3316 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec8e9c1-4321-4965-b5fd-6e54d9442fa1" path="/var/lib/kubelet/pods/1ec8e9c1-4321-4965-b5fd-6e54d9442fa1/volumes" Nov 24 00:08:38.882810 kubelet[3316]: I1124 00:08:38.882106 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:08:38.990309 (udev-worker)[4642]: Network interface NamePolicy= disabled on kernel command line. 
Nov 24 00:08:38.992397 systemd-networkd[1838]: cali605de2d3b2e: Link UP Nov 24 00:08:38.993274 systemd-networkd[1838]: cali605de2d3b2e: Gained carrier Nov 24 00:08:39.057706 containerd[1972]: 2025-11-24 00:08:38.377 [INFO][4756] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:08:39.057706 containerd[1972]: 2025-11-24 00:08:38.440 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0 whisker-dfdd4b85d- calico-system 2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1 939 0 2025-11-24 00:08:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dfdd4b85d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-87 whisker-dfdd4b85d-wmqzw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali605de2d3b2e [] [] }} ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-" Nov 24 00:08:39.057706 containerd[1972]: 2025-11-24 00:08:38.440 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.057706 containerd[1972]: 2025-11-24 00:08:38.851 [INFO][4771] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" HandleID="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Workload="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.855 [INFO][4771] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" HandleID="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Workload="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000306ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-87", "pod":"whisker-dfdd4b85d-wmqzw", "timestamp":"2025-11-24 00:08:38.851062632 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.855 [INFO][4771] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.856 [INFO][4771] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
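
[Editor's note] The CNI log above keys the workload as "ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0": node name, orchestrator, pod name, and interface joined by single dashes, with dashes inside each field doubled so the fields stay separable. The helper below reconstructs that format as observed here; it is not taken from Calico source.

```go
// wepname.go - rebuilds the WorkloadEndpoint name format visible in the CNI log.
package main

import (
	"fmt"
	"strings"
)

func wepName(node, orchestrator, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), esc(orchestrator), esc(pod), esc(iface)}, "-")
}

func main() {
	fmt.Println(wepName("ip-172-31-16-87", "k8s", "whisker-dfdd4b85d-wmqzw", "eth0"))
	// ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0
}
```
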
Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.857 [INFO][4771] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.885 [INFO][4771] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" host="ip-172-31-16-87" Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.913 [INFO][4771] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.923 [INFO][4771] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.926 [INFO][4771] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.936 [INFO][4771] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:39.058092 containerd[1972]: 2025-11-24 00:08:38.936 [INFO][4771] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" host="ip-172-31-16-87" Nov 24 00:08:39.064251 containerd[1972]: 2025-11-24 00:08:38.941 [INFO][4771] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47 Nov 24 00:08:39.064251 containerd[1972]: 2025-11-24 00:08:38.948 [INFO][4771] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" host="ip-172-31-16-87" Nov 24 00:08:39.064251 containerd[1972]: 2025-11-24 00:08:38.959 [INFO][4771] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.193/26] block=192.168.25.192/26 handle="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" host="ip-172-31-16-87" Nov 24 00:08:39.064251 containerd[1972]: 2025-11-24 00:08:38.960 [INFO][4771] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.193/26] handle="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" host="ip-172-31-16-87" Nov 24 00:08:39.064251 containerd[1972]: 2025-11-24 00:08:38.960 [INFO][4771] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
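
[Editor's note] The IPAM lines above claim 192.168.25.193 out of the node's affine block 192.168.25.192/26. A quick sanity check of those numbers with the standard library: the claimed address falls inside the block, and a /26 holds 2^(32-26) = 64 addresses.

```go
// ipamblock.go - sanity-checks the IPAM assignment logged above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.25.192/26")
	claimed := netip.MustParseAddr("192.168.25.193")

	fmt.Println(block.Contains(claimed))  // true: the claimed IP is in the affine block
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block
}
```
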
Nov 24 00:08:39.064251 containerd[1972]: 2025-11-24 00:08:38.960 [INFO][4771] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.193/26] IPv6=[] ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" HandleID="k8s-pod-network.ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Workload="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.064506 containerd[1972]: 2025-11-24 00:08:38.964 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0", GenerateName:"whisker-dfdd4b85d-", Namespace:"calico-system", SelfLink:"", UID:"2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dfdd4b85d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"whisker-dfdd4b85d-wmqzw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali605de2d3b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:39.064506 containerd[1972]: 2025-11-24 00:08:38.964 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.193/32] ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.065875 containerd[1972]: 2025-11-24 00:08:38.964 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali605de2d3b2e ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.065875 containerd[1972]: 2025-11-24 00:08:38.996 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.065955 containerd[1972]: 2025-11-24 00:08:38.999 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" 
WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0", GenerateName:"whisker-dfdd4b85d-", Namespace:"calico-system", SelfLink:"", UID:"2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dfdd4b85d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47", Pod:"whisker-dfdd4b85d-wmqzw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali605de2d3b2e", MAC:"a6:3c:29:ec:c5:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:39.066058 containerd[1972]: 2025-11-24 00:08:39.036 [INFO][4756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" Namespace="calico-system" Pod="whisker-dfdd4b85d-wmqzw" WorkloadEndpoint="ip--172--31--16--87-k8s-whisker--dfdd4b85d--wmqzw-eth0" Nov 24 00:08:39.564069 (udev-worker)[4641]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:08:39.566038 systemd-networkd[1838]: vxlan.calico: Link UP Nov 24 00:08:39.566044 systemd-networkd[1838]: vxlan.calico: Gained carrier Nov 24 00:08:39.621189 containerd[1972]: time="2025-11-24T00:08:39.621121244Z" level=info msg="connecting to shim ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47" address="unix:///run/containerd/s/73f1814e3cc55f782250f3b4e8426b0eeb81825c2248f0e6656f13de40e5a0ad" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:39.699910 systemd[1]: Started cri-containerd-ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47.scope - libcontainer container ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47. 
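[Annotation] At this point the CNI ADD for the whisker pod is complete and containerd connects to the sandbox shim. What the plugin hands back to the runtime reduces to the host-side veth name and the single /32 it claimed. The sketch below renders that in the general shape of a CNI result; the field layout follows the CNI result schema and the values are copied from the log entries above, but the exact payload Calico emits is not shown in the log, so treat this as an approximation.

package main

import (
    "encoding/json"
    "fmt"
)

// Only the result fields needed for this illustration are included.
type cniInterface struct {
    Name string `json:"name"`
    Mac  string `json:"mac,omitempty"`
}

type cniIP struct {
    Address   string `json:"address"`
    Interface *int   `json:"interface,omitempty"`
}

type cniResult struct {
    CNIVersion string         `json:"cniVersion"`
    Interfaces []cniInterface `json:"interfaces"`
    IPs        []cniIP        `json:"ips"`
}

func main() {
    podSide := 1
    res := cniResult{
        CNIVersion: "1.0.0",
        Interfaces: []cniInterface{
            {Name: "cali605de2d3b2e"},                // host-side veth from the log
            {Name: "eth0", Mac: "a6:3c:29:ec:c5:c8"}, // pod interface; MAC as recorded on the WorkloadEndpoint
        },
        IPs: []cniIP{{Address: "192.168.25.193/32", Interface: &podSide}},
    }
    out, _ := json.MarshalIndent(res, "", "  ")
    fmt.Println(string(out))
}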
Nov 24 00:08:39.901052 containerd[1972]: time="2025-11-24T00:08:39.900648592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfdd4b85d-wmqzw,Uid:2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac8558169d91c411a91a6597a82f00d91bf2db4145bd0e68f7e7b6ce4d854d47\"" Nov 24 00:08:39.957289 containerd[1972]: time="2025-11-24T00:08:39.957224061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:08:40.238017 containerd[1972]: time="2025-11-24T00:08:40.237865750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44qlh,Uid:a7f2741e-c2a8-4e97-9679-431279b978f1,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:40.239869 containerd[1972]: time="2025-11-24T00:08:40.238298531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:40.242532 containerd[1972]: time="2025-11-24T00:08:40.242403299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:08:40.242532 containerd[1972]: time="2025-11-24T00:08:40.242466112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:08:40.244888 kubelet[3316]: E1124 00:08:40.242632 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:08:40.244888 kubelet[3316]: E1124 00:08:40.242683 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:08:40.259932 kubelet[3316]: E1124 00:08:40.259517 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:de3c06f5af6f4d53b271c97ff9b037fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:40.267164 containerd[1972]: time="2025-11-24T00:08:40.266937588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:08:40.451645 systemd-networkd[1838]: cali8ab4c5c0900: Link UP Nov 24 00:08:40.453554 systemd-networkd[1838]: cali8ab4c5c0900: Gained carrier Nov 24 00:08:40.490697 containerd[1972]: 2025-11-24 00:08:40.323 [INFO][4984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0 csi-node-driver- calico-system a7f2741e-c2a8-4e97-9679-431279b978f1 760 0 2025-11-24 00:08:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-87 csi-node-driver-44qlh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ab4c5c0900 [] [] }} ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-" Nov 24 00:08:40.490697 containerd[1972]: 2025-11-24 00:08:40.323 [INFO][4984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.490697 containerd[1972]: 2025-11-24 00:08:40.368 [INFO][4997] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" HandleID="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Workload="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.368 [INFO][4997] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" HandleID="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Workload="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-87", "pod":"csi-node-driver-44qlh", "timestamp":"2025-11-24 00:08:40.368629659 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.368 [INFO][4997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.368 [INFO][4997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.368 [INFO][4997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.377 [INFO][4997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" host="ip-172-31-16-87" Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.393 [INFO][4997] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.402 [INFO][4997] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.406 [INFO][4997] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.420 [INFO][4997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:40.491013 containerd[1972]: 2025-11-24 00:08:40.420 [INFO][4997] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" host="ip-172-31-16-87" Nov 24 00:08:40.492072 containerd[1972]: 2025-11-24 00:08:40.423 [INFO][4997] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741 Nov 24 00:08:40.492072 containerd[1972]: 2025-11-24 00:08:40.434 [INFO][4997] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" host="ip-172-31-16-87" Nov 24 00:08:40.492072 containerd[1972]: 2025-11-24 00:08:40.443 [INFO][4997] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.194/26] block=192.168.25.192/26 handle="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" host="ip-172-31-16-87" Nov 24 00:08:40.492072 containerd[1972]: 2025-11-24 
00:08:40.444 [INFO][4997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.194/26] handle="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" host="ip-172-31-16-87" Nov 24 00:08:40.492072 containerd[1972]: 2025-11-24 00:08:40.444 [INFO][4997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:08:40.492072 containerd[1972]: 2025-11-24 00:08:40.444 [INFO][4997] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.194/26] IPv6=[] ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" HandleID="k8s-pod-network.f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Workload="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.493232 containerd[1972]: 2025-11-24 00:08:40.448 [INFO][4984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7f2741e-c2a8-4e97-9679-431279b978f1", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"csi-node-driver-44qlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ab4c5c0900", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:40.493369 containerd[1972]: 2025-11-24 00:08:40.448 [INFO][4984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.194/32] ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.493369 containerd[1972]: 2025-11-24 00:08:40.448 [INFO][4984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ab4c5c0900 ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.493369 containerd[1972]: 2025-11-24 00:08:40.453 [INFO][4984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" 
Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.493504 containerd[1972]: 2025-11-24 00:08:40.453 [INFO][4984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7f2741e-c2a8-4e97-9679-431279b978f1", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741", Pod:"csi-node-driver-44qlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ab4c5c0900", MAC:"36:b4:85:08:67:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:40.493618 containerd[1972]: 2025-11-24 00:08:40.474 [INFO][4984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" Namespace="calico-system" Pod="csi-node-driver-44qlh" WorkloadEndpoint="ip--172--31--16--87-k8s-csi--node--driver--44qlh-eth0" Nov 24 00:08:40.545096 containerd[1972]: time="2025-11-24T00:08:40.544971653Z" level=info msg="connecting to shim f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741" address="unix:///run/containerd/s/4a431149c8ca44b453fadbfb733159669bbfa2618a5db53e86ca53ea8cffd074" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:40.560247 containerd[1972]: time="2025-11-24T00:08:40.560170826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:40.569016 containerd[1972]: time="2025-11-24T00:08:40.568954928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:08:40.569701 containerd[1972]: time="2025-11-24T00:08:40.569542066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:08:40.572877 kubelet[3316]: E1124 00:08:40.570060 
3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:08:40.572877 kubelet[3316]: E1124 00:08:40.570230 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:08:40.573288 kubelet[3316]: E1124 00:08:40.573219 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:40.575621 kubelet[3316]: E1124 00:08:40.575061 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", 
failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:08:40.597728 systemd-networkd[1838]: cali605de2d3b2e: Gained IPv6LL Nov 24 00:08:40.599910 systemd[1]: Started cri-containerd-f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741.scope - libcontainer container f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741. Nov 24 00:08:40.650285 containerd[1972]: time="2025-11-24T00:08:40.650241690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-44qlh,Uid:a7f2741e-c2a8-4e97-9679-431279b978f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"f76e1954de07081c4a33e22d76dde9487662e76e491976db4ea5c73b17367741\"" Nov 24 00:08:40.653034 containerd[1972]: time="2025-11-24T00:08:40.652995578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:08:40.785524 kubelet[3316]: E1124 00:08:40.784049 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:08:40.895138 containerd[1972]: time="2025-11-24T00:08:40.895090279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:40.897444 containerd[1972]: time="2025-11-24T00:08:40.897257775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:08:40.897444 containerd[1972]: time="2025-11-24T00:08:40.897372897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:08:40.897671 kubelet[3316]: E1124 00:08:40.897595 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:08:40.897778 kubelet[3316]: E1124 00:08:40.897704 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:08:40.898030 kubelet[3316]: E1124 00:08:40.897894 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:40.901268 containerd[1972]: time="2025-11-24T00:08:40.901117251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:08:41.148726 containerd[1972]: time="2025-11-24T00:08:41.148536802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:41.150998 containerd[1972]: time="2025-11-24T00:08:41.150905649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:08:41.150998 containerd[1972]: time="2025-11-24T00:08:41.150955972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:08:41.151346 kubelet[3316]: E1124 00:08:41.151246 3316 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:08:41.151346 kubelet[3316]: E1124 00:08:41.151314 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:08:41.151615 kubelet[3316]: E1124 00:08:41.151493 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:41.153633 kubelet[3316]: E1124 00:08:41.153492 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:41.237199 containerd[1972]: time="2025-11-24T00:08:41.237153133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857d84d84d-ncvx2,Uid:293f9213-9ce6-465e-8d91-13e61a8f35a0,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:41.302505 systemd-networkd[1838]: vxlan.calico: Gained IPv6LL Nov 24 00:08:41.537715 systemd-networkd[1838]: cali5b33f8aa480: Link UP Nov 24 00:08:41.539500 systemd-networkd[1838]: cali5b33f8aa480: Gained carrier Nov 24 00:08:41.582794 containerd[1972]: 2025-11-24 00:08:41.346 [INFO][5062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0 calico-kube-controllers-857d84d84d- calico-system 293f9213-9ce6-465e-8d91-13e61a8f35a0 870 0 2025-11-24 00:08:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:857d84d84d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-87 calico-kube-controllers-857d84d84d-ncvx2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5b33f8aa480 [] [] }} ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-" Nov 24 00:08:41.582794 containerd[1972]: 2025-11-24 00:08:41.346 [INFO][5062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.582794 containerd[1972]: 2025-11-24 00:08:41.405 [INFO][5074] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" HandleID="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Workload="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.407 [INFO][5074] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" HandleID="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Workload="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-87", "pod":"calico-kube-controllers-857d84d84d-ncvx2", "timestamp":"2025-11-24 00:08:41.405978632 +0000 UTC"}, 
Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.407 [INFO][5074] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.407 [INFO][5074] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.407 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.438 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" host="ip-172-31-16-87" Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.454 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.469 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.473 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:41.583132 containerd[1972]: 2025-11-24 00:08:41.479 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.479 [INFO][5074] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" host="ip-172-31-16-87" Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.496 [INFO][5074] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.513 [INFO][5074] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" host="ip-172-31-16-87" Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.524 [INFO][5074] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.195/26] block=192.168.25.192/26 handle="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" host="ip-172-31-16-87" Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.524 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.195/26] handle="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" host="ip-172-31-16-87" Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.525 [INFO][5074] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
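[Annotation] Every pod on this node draws from the same affine block, which is why the assigned addresses step through .193, .194, .195, .196 in these traces. A quick check of what 192.168.25.192/26 actually spans (standard library only, nothing Calico-specific):

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    prefix := netip.MustParsePrefix("192.168.25.192/26")
    first := prefix.Addr()             // 192.168.25.192 (network address)
    count := 1 << (32 - prefix.Bits()) // 64 addresses in a /26
    last := first
    for i := 0; i < count-1; i++ {
        last = last.Next()
    }
    fmt.Printf("%s spans %s-%s (%d addresses)\n", prefix, first, last, count)
}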
Nov 24 00:08:41.583553 containerd[1972]: 2025-11-24 00:08:41.525 [INFO][5074] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.195/26] IPv6=[] ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" HandleID="k8s-pod-network.2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Workload="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.584985 containerd[1972]: 2025-11-24 00:08:41.530 [INFO][5062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0", GenerateName:"calico-kube-controllers-857d84d84d-", Namespace:"calico-system", SelfLink:"", UID:"293f9213-9ce6-465e-8d91-13e61a8f35a0", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"857d84d84d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"calico-kube-controllers-857d84d84d-ncvx2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b33f8aa480", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:41.585798 containerd[1972]: 2025-11-24 00:08:41.531 [INFO][5062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.195/32] ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.585798 containerd[1972]: 2025-11-24 00:08:41.531 [INFO][5062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b33f8aa480 ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.585798 containerd[1972]: 2025-11-24 00:08:41.543 [INFO][5062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.586826 containerd[1972]: 
2025-11-24 00:08:41.543 [INFO][5062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0", GenerateName:"calico-kube-controllers-857d84d84d-", Namespace:"calico-system", SelfLink:"", UID:"293f9213-9ce6-465e-8d91-13e61a8f35a0", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"857d84d84d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e", Pod:"calico-kube-controllers-857d84d84d-ncvx2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b33f8aa480", MAC:"7a:39:56:99:f0:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:41.587105 containerd[1972]: 2025-11-24 00:08:41.577 [INFO][5062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" Namespace="calico-system" Pod="calico-kube-controllers-857d84d84d-ncvx2" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--kube--controllers--857d84d84d--ncvx2-eth0" Nov 24 00:08:41.622057 systemd-networkd[1838]: cali8ab4c5c0900: Gained IPv6LL Nov 24 00:08:41.642549 containerd[1972]: time="2025-11-24T00:08:41.642492799Z" level=info msg="connecting to shim 2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e" address="unix:///run/containerd/s/0a3836f3213076495d530553fc086e8fdc7f4aa5d24d2bb97c4864da40a51b20" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:41.691068 systemd[1]: Started cri-containerd-2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e.scope - libcontainer container 2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e. 
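[Annotation] The pod_workers entries that follow show the usual progression for a tag that does not exist upstream: the first sync fails with ErrImagePull, and later syncs report ImagePullBackOff while the kubelet waits out an exponentially growing delay (by default roughly 10 seconds, doubling per failure, capped at five minutes). The snippet below is a schematic of that retry schedule, not kubelet code; the exact delays are an assumption based on the default back-off settings.

package main

import (
    "fmt"
    "time"
)

func main() {
    delay := 10 * time.Second        // initial back-off after the first ErrImagePull
    const maxDelay = 5 * time.Minute // the delay stops growing at this cap

    for attempt := 1; attempt <= 7; attempt++ {
        fmt.Printf("sync %d: ImagePullBackOff, next pull attempt in %s\n", attempt, delay)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}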
Nov 24 00:08:41.800694 kubelet[3316]: E1124 00:08:41.800317 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:41.804081 kubelet[3316]: E1124 00:08:41.802106 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:08:41.823924 containerd[1972]: time="2025-11-24T00:08:41.823856209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857d84d84d-ncvx2,Uid:293f9213-9ce6-465e-8d91-13e61a8f35a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d5101d105f2ad89662a62d4292c616bba411630ba6be7710f1d27630980c52e\"" Nov 24 00:08:41.828481 containerd[1972]: time="2025-11-24T00:08:41.828411867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:08:42.113308 containerd[1972]: time="2025-11-24T00:08:42.112880454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:42.118550 containerd[1972]: time="2025-11-24T00:08:42.118412728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:08:42.118550 containerd[1972]: time="2025-11-24T00:08:42.118508776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:08:42.118804 kubelet[3316]: E1124 00:08:42.118746 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:08:42.118864 kubelet[3316]: E1124 00:08:42.118808 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:08:42.119134 kubelet[3316]: E1124 00:08:42.119024 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-857d84d84d-ncvx2_calico-system(293f9213-9ce6-465e-8d91-13e61a8f35a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:42.120790 kubelet[3316]: E1124 00:08:42.120490 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:08:42.241069 containerd[1972]: time="2025-11-24T00:08:42.241015399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jzjm6,Uid:63a82b4c-a5db-46d5-9bde-8b4be9966835,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:08:42.241455 containerd[1972]: time="2025-11-24T00:08:42.241340555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-624q9,Uid:1987144d-d184-44c0-92fb-e90e141fbcf8,Namespace:kube-system,Attempt:0,}" Nov 24 00:08:42.242401 containerd[1972]: time="2025-11-24T00:08:42.241079553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bqwch,Uid:348cf778-5f5c-4d14-8753-45e0fb5f1d98,Namespace:kube-system,Attempt:0,}" Nov 24 00:08:42.242750 containerd[1972]: time="2025-11-24T00:08:42.242725229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jqnrx,Uid:18d54b97-5424-4119-892c-ebd148db0571,Namespace:calico-system,Attempt:0,}" Nov 24 00:08:42.683676 systemd-networkd[1838]: cali64481a8f21a: Link UP Nov 24 00:08:42.683891 systemd-networkd[1838]: cali64481a8f21a: Gained carrier Nov 24 00:08:42.716082 containerd[1972]: 2025-11-24 00:08:42.437 [INFO][5134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0 calico-apiserver-7cc86c6ddc- calico-apiserver 63a82b4c-a5db-46d5-9bde-8b4be9966835 869 0 2025-11-24 00:08:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc86c6ddc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-87 calico-apiserver-7cc86c6ddc-jzjm6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64481a8f21a [] [] }} ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-" Nov 24 00:08:42.716082 containerd[1972]: 2025-11-24 00:08:42.438 [INFO][5134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.716082 containerd[1972]: 2025-11-24 00:08:42.589 [INFO][5185] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" 
HandleID="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Workload="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.590 [INFO][5185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" HandleID="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Workload="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-87", "pod":"calico-apiserver-7cc86c6ddc-jzjm6", "timestamp":"2025-11-24 00:08:42.589049112 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.590 [INFO][5185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.591 [INFO][5185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.591 [INFO][5185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.628 [INFO][5185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" host="ip-172-31-16-87" Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.636 [INFO][5185] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.644 [INFO][5185] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.649 [INFO][5185] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:42.716470 containerd[1972]: 2025-11-24 00:08:42.653 [INFO][5185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.653 [INFO][5185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" host="ip-172-31-16-87" Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.656 [INFO][5185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.661 [INFO][5185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" host="ip-172-31-16-87" Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.672 [INFO][5185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.196/26] block=192.168.25.192/26 handle="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" host="ip-172-31-16-87" Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.672 [INFO][5185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.196/26] 
handle="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" host="ip-172-31-16-87" Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.672 [INFO][5185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:08:42.716893 containerd[1972]: 2025-11-24 00:08:42.672 [INFO][5185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.196/26] IPv6=[] ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" HandleID="k8s-pod-network.dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Workload="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.718092 containerd[1972]: 2025-11-24 00:08:42.678 [INFO][5134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0", GenerateName:"calico-apiserver-7cc86c6ddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"63a82b4c-a5db-46d5-9bde-8b4be9966835", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc86c6ddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"calico-apiserver-7cc86c6ddc-jzjm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64481a8f21a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:42.718429 containerd[1972]: 2025-11-24 00:08:42.678 [INFO][5134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.196/32] ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.718429 containerd[1972]: 2025-11-24 00:08:42.678 [INFO][5134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64481a8f21a ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.718429 containerd[1972]: 2025-11-24 00:08:42.680 [INFO][5134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" 
Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.719367 containerd[1972]: 2025-11-24 00:08:42.681 [INFO][5134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0", GenerateName:"calico-apiserver-7cc86c6ddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"63a82b4c-a5db-46d5-9bde-8b4be9966835", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc86c6ddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa", Pod:"calico-apiserver-7cc86c6ddc-jzjm6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64481a8f21a", MAC:"f2:5f:fb:be:fc:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:42.719501 containerd[1972]: 2025-11-24 00:08:42.695 [INFO][5134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jzjm6" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jzjm6-eth0" Nov 24 00:08:42.790827 containerd[1972]: time="2025-11-24T00:08:42.790761824Z" level=info msg="connecting to shim dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa" address="unix:///run/containerd/s/9aabe493564bec597f299efbf34c5ec2f289208de14e03c8006ec688754b32bd" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:42.811295 kubelet[3316]: E1124 00:08:42.811238 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:08:42.853759 systemd-networkd[1838]: calia19317b3c91: Link UP 
Nov 24 00:08:42.858510 systemd-networkd[1838]: calia19317b3c91: Gained carrier Nov 24 00:08:42.911780 containerd[1972]: 2025-11-24 00:08:42.498 [INFO][5144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0 coredns-674b8bbfcf- kube-system 1987144d-d184-44c0-92fb-e90e141fbcf8 865 0 2025-11-24 00:07:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-87 coredns-674b8bbfcf-624q9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia19317b3c91 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-" Nov 24 00:08:42.911780 containerd[1972]: 2025-11-24 00:08:42.498 [INFO][5144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.911780 containerd[1972]: 2025-11-24 00:08:42.592 [INFO][5201] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" HandleID="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Workload="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.592 [INFO][5201] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" HandleID="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Workload="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000371090), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-87", "pod":"coredns-674b8bbfcf-624q9", "timestamp":"2025-11-24 00:08:42.592311688 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.592 [INFO][5201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.672 [INFO][5201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.672 [INFO][5201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.729 [INFO][5201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" host="ip-172-31-16-87" Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.736 [INFO][5201] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.745 [INFO][5201] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.748 [INFO][5201] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.757 [INFO][5201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:42.913501 containerd[1972]: 2025-11-24 00:08:42.757 [INFO][5201] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" host="ip-172-31-16-87" Nov 24 00:08:42.915703 containerd[1972]: 2025-11-24 00:08:42.760 [INFO][5201] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0 Nov 24 00:08:42.915703 containerd[1972]: 2025-11-24 00:08:42.776 [INFO][5201] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" host="ip-172-31-16-87" Nov 24 00:08:42.915703 containerd[1972]: 2025-11-24 00:08:42.816 [INFO][5201] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.197/26] block=192.168.25.192/26 handle="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" host="ip-172-31-16-87" Nov 24 00:08:42.915703 containerd[1972]: 2025-11-24 00:08:42.817 [INFO][5201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.197/26] handle="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" host="ip-172-31-16-87" Nov 24 00:08:42.915703 containerd[1972]: 2025-11-24 00:08:42.818 [INFO][5201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:08:42.915703 containerd[1972]: 2025-11-24 00:08:42.820 [INFO][5201] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.197/26] IPv6=[] ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" HandleID="k8s-pod-network.a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Workload="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.916374 containerd[1972]: 2025-11-24 00:08:42.843 [INFO][5144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1987144d-d184-44c0-92fb-e90e141fbcf8", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"coredns-674b8bbfcf-624q9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia19317b3c91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:42.916374 containerd[1972]: 2025-11-24 00:08:42.844 [INFO][5144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.197/32] ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.916374 containerd[1972]: 2025-11-24 00:08:42.844 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia19317b3c91 ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.916374 containerd[1972]: 2025-11-24 00:08:42.864 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" 
WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.916374 containerd[1972]: 2025-11-24 00:08:42.870 [INFO][5144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1987144d-d184-44c0-92fb-e90e141fbcf8", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0", Pod:"coredns-674b8bbfcf-624q9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia19317b3c91", MAC:"d2:1e:61:da:df:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:42.916374 containerd[1972]: 2025-11-24 00:08:42.898 [INFO][5144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" Namespace="kube-system" Pod="coredns-674b8bbfcf-624q9" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--624q9-eth0" Nov 24 00:08:42.950836 systemd[1]: Started cri-containerd-dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa.scope - libcontainer container dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa. 
Nov 24 00:08:43.014261 systemd-networkd[1838]: calie42b2bea121: Link UP Nov 24 00:08:43.016818 systemd-networkd[1838]: calie42b2bea121: Gained carrier Nov 24 00:08:43.042132 containerd[1972]: time="2025-11-24T00:08:43.042003840Z" level=info msg="connecting to shim a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0" address="unix:///run/containerd/s/52e25ed18bfe5de89892dde9ad9aafa4416055a12fd58a6effbf873f5f7da8de" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.483 [INFO][5156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0 goldmane-666569f655- calico-system 18d54b97-5424-4119-892c-ebd148db0571 871 0 2025-11-24 00:08:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-87 goldmane-666569f655-jqnrx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie42b2bea121 [] [] }} ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.484 [INFO][5156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.625 [INFO][5191] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" HandleID="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Workload="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.625 [INFO][5191] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" HandleID="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Workload="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000269a70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-87", "pod":"goldmane-666569f655-jqnrx", "timestamp":"2025-11-24 00:08:42.625589149 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.625 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.817 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.817 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.867 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.906 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.931 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.938 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.944 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.945 [INFO][5191] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.950 [INFO][5191] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485 Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.971 [INFO][5191] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.985 [INFO][5191] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.198/26] block=192.168.25.192/26 handle="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.985 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.198/26] handle="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" host="ip-172-31-16-87" Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.986 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:08:43.073942 containerd[1972]: 2025-11-24 00:08:42.986 [INFO][5191] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.198/26] IPv6=[] ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" HandleID="k8s-pod-network.9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Workload="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.076092 containerd[1972]: 2025-11-24 00:08:43.006 [INFO][5156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"18d54b97-5424-4119-892c-ebd148db0571", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"goldmane-666569f655-jqnrx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie42b2bea121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:43.076092 containerd[1972]: 2025-11-24 00:08:43.007 [INFO][5156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.198/32] ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.076092 containerd[1972]: 2025-11-24 00:08:43.007 [INFO][5156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie42b2bea121 ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.076092 containerd[1972]: 2025-11-24 00:08:43.018 [INFO][5156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.076092 containerd[1972]: 2025-11-24 00:08:43.019 [INFO][5156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" 
WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"18d54b97-5424-4119-892c-ebd148db0571", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485", Pod:"goldmane-666569f655-jqnrx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie42b2bea121", MAC:"66:21:b1:ba:7c:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:43.076092 containerd[1972]: 2025-11-24 00:08:43.066 [INFO][5156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" Namespace="calico-system" Pod="goldmane-666569f655-jqnrx" WorkloadEndpoint="ip--172--31--16--87-k8s-goldmane--666569f655--jqnrx-eth0" Nov 24 00:08:43.137038 systemd[1]: Started cri-containerd-a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0.scope - libcontainer container a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0. 
Nov 24 00:08:43.176487 containerd[1972]: time="2025-11-24T00:08:43.175800733Z" level=info msg="connecting to shim 9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485" address="unix:///run/containerd/s/2e19b3719020560e0d74d03df2bd68364f745359880cb3f6263b207eddfe7dbf" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:43.213331 systemd-networkd[1838]: calic0d6e99c2af: Link UP Nov 24 00:08:43.220629 systemd-networkd[1838]: calic0d6e99c2af: Gained carrier Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.468 [INFO][5158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0 coredns-674b8bbfcf- kube-system 348cf778-5f5c-4d14-8753-45e0fb5f1d98 859 0 2025-11-24 00:07:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-87 coredns-674b8bbfcf-bqwch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0d6e99c2af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.471 [INFO][5158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.628 [INFO][5196] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" HandleID="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Workload="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.628 [INFO][5196] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" HandleID="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Workload="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000275760), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-87", "pod":"coredns-674b8bbfcf-bqwch", "timestamp":"2025-11-24 00:08:42.628123928 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.628 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.986 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:42.986 [INFO][5196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.015 [INFO][5196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.051 [INFO][5196] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.116 [INFO][5196] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.126 [INFO][5196] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.134 [INFO][5196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.134 [INFO][5196] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.145 [INFO][5196] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2 Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.159 [INFO][5196] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.182 [INFO][5196] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.199/26] block=192.168.25.192/26 handle="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.182 [INFO][5196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.199/26] handle="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" host="ip-172-31-16-87" Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.182 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:08:43.265493 containerd[1972]: 2025-11-24 00:08:43.182 [INFO][5196] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.199/26] IPv6=[] ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" HandleID="k8s-pod-network.90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Workload="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.266548 containerd[1972]: 2025-11-24 00:08:43.197 [INFO][5158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"348cf778-5f5c-4d14-8753-45e0fb5f1d98", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"coredns-674b8bbfcf-bqwch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0d6e99c2af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:43.266548 containerd[1972]: 2025-11-24 00:08:43.198 [INFO][5158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.199/32] ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.266548 containerd[1972]: 2025-11-24 00:08:43.199 [INFO][5158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0d6e99c2af ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.266548 containerd[1972]: 2025-11-24 00:08:43.227 [INFO][5158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" 
WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.266548 containerd[1972]: 2025-11-24 00:08:43.229 [INFO][5158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"348cf778-5f5c-4d14-8753-45e0fb5f1d98", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 7, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2", Pod:"coredns-674b8bbfcf-bqwch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0d6e99c2af", MAC:"d6:c3:01:4d:ed:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:43.266548 containerd[1972]: 2025-11-24 00:08:43.252 [INFO][5158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-bqwch" WorkloadEndpoint="ip--172--31--16--87-k8s-coredns--674b8bbfcf--bqwch-eth0" Nov 24 00:08:43.283336 systemd[1]: Started cri-containerd-9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485.scope - libcontainer container 9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485. 
Nov 24 00:08:43.319170 containerd[1972]: time="2025-11-24T00:08:43.319065510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-624q9,Uid:1987144d-d184-44c0-92fb-e90e141fbcf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0\"" Nov 24 00:08:43.352950 containerd[1972]: time="2025-11-24T00:08:43.352519444Z" level=info msg="connecting to shim 90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2" address="unix:///run/containerd/s/3b9f09c35b3d2b17120fcf319925648318f3a99019f6aac3acad52da09173b30" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:43.372362 containerd[1972]: time="2025-11-24T00:08:43.372305389Z" level=info msg="CreateContainer within sandbox \"a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:08:43.423210 containerd[1972]: time="2025-11-24T00:08:43.423091051Z" level=info msg="Container f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:43.423960 systemd[1]: Started cri-containerd-90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2.scope - libcontainer container 90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2. Nov 24 00:08:43.441512 containerd[1972]: time="2025-11-24T00:08:43.441271739Z" level=info msg="CreateContainer within sandbox \"a761da795575702ba660890e28477cafcba526e0718a20884d1f8c51b4077dc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1\"" Nov 24 00:08:43.442958 containerd[1972]: time="2025-11-24T00:08:43.442920922Z" level=info msg="StartContainer for \"f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1\"" Nov 24 00:08:43.446281 containerd[1972]: time="2025-11-24T00:08:43.445845076Z" level=info msg="connecting to shim f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1" address="unix:///run/containerd/s/52e25ed18bfe5de89892dde9ad9aafa4416055a12fd58a6effbf873f5f7da8de" protocol=ttrpc version=3 Nov 24 00:08:43.489378 systemd[1]: Started cri-containerd-f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1.scope - libcontainer container f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1. 
Nov 24 00:08:43.541913 systemd-networkd[1838]: cali5b33f8aa480: Gained IPv6LL Nov 24 00:08:43.601433 containerd[1972]: time="2025-11-24T00:08:43.601275567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bqwch,Uid:348cf778-5f5c-4d14-8753-45e0fb5f1d98,Namespace:kube-system,Attempt:0,} returns sandbox id \"90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2\"" Nov 24 00:08:43.635913 containerd[1972]: time="2025-11-24T00:08:43.635684982Z" level=info msg="CreateContainer within sandbox \"90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:08:43.640969 containerd[1972]: time="2025-11-24T00:08:43.640920448Z" level=info msg="StartContainer for \"f30d533edf7d3da0300cfccc124c9576dd132875dd6e343bce6af7af66ce68f1\" returns successfully" Nov 24 00:08:43.662046 containerd[1972]: time="2025-11-24T00:08:43.660776014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jzjm6,Uid:63a82b4c-a5db-46d5-9bde-8b4be9966835,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dea9420bee0681b83195dc8aef5048215db5839fc9c24c061b9e3fcd8ac25ffa\"" Nov 24 00:08:43.671590 containerd[1972]: time="2025-11-24T00:08:43.670302818Z" level=info msg="Container 2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:08:43.678057 containerd[1972]: time="2025-11-24T00:08:43.676298360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:08:43.724394 containerd[1972]: time="2025-11-24T00:08:43.724253094Z" level=info msg="CreateContainer within sandbox \"90023154f36cd14ad664f4b57deb8020267debb3387566974a32b3dbfdccb3b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5\"" Nov 24 00:08:43.745266 containerd[1972]: time="2025-11-24T00:08:43.744993504Z" level=info msg="StartContainer for \"2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5\"" Nov 24 00:08:43.756603 containerd[1972]: time="2025-11-24T00:08:43.753460863Z" level=info msg="connecting to shim 2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5" address="unix:///run/containerd/s/3b9f09c35b3d2b17120fcf319925648318f3a99019f6aac3acad52da09173b30" protocol=ttrpc version=3 Nov 24 00:08:43.770605 containerd[1972]: time="2025-11-24T00:08:43.767707114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jqnrx,Uid:18d54b97-5424-4119-892c-ebd148db0571,Namespace:calico-system,Attempt:0,} returns sandbox id \"9350cc61de7f419370be6c05a4b0c7bd72cf84ee54fdd8f5eabe6bad2d64b485\"" Nov 24 00:08:43.799168 systemd-networkd[1838]: cali64481a8f21a: Gained IPv6LL Nov 24 00:08:43.832846 systemd[1]: Started cri-containerd-2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5.scope - libcontainer container 2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5. 
Nov 24 00:08:43.850126 kubelet[3316]: E1124 00:08:43.850007 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:08:43.870773 kubelet[3316]: I1124 00:08:43.869926 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-624q9" podStartSLOduration=46.865875039 podStartE2EDuration="46.865875039s" podCreationTimestamp="2025-11-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:08:43.865152576 +0000 UTC m=+51.855175241" watchObservedRunningTime="2025-11-24 00:08:43.865875039 +0000 UTC m=+51.855897675" Nov 24 00:08:43.952363 containerd[1972]: time="2025-11-24T00:08:43.951369067Z" level=info msg="StartContainer for \"2d8e00f71eec542827b9b1928b72aea17a9e5b2168af61a8fda0051385265dc5\" returns successfully" Nov 24 00:08:44.023900 containerd[1972]: time="2025-11-24T00:08:44.023839620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:44.029322 containerd[1972]: time="2025-11-24T00:08:44.027880563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:08:44.029322 containerd[1972]: time="2025-11-24T00:08:44.027986006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:08:44.032575 kubelet[3316]: E1124 00:08:44.028250 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:44.032575 kubelet[3316]: E1124 00:08:44.028329 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:44.032575 kubelet[3316]: E1124 00:08:44.029131 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx9h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jzjm6_calico-apiserver(63a82b4c-a5db-46d5-9bde-8b4be9966835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:44.032843 containerd[1972]: time="2025-11-24T00:08:44.032017546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:08:44.032894 kubelet[3316]: E1124 00:08:44.032770 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:08:44.240183 containerd[1972]: time="2025-11-24T00:08:44.240031125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jtm28,Uid:fe389aaa-291c-4fa0-a06f-e4820906cbf6,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:08:44.284140 containerd[1972]: time="2025-11-24T00:08:44.283366300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:44.286125 containerd[1972]: time="2025-11-24T00:08:44.286056276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:08:44.286358 containerd[1972]: time="2025-11-24T00:08:44.286090065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:08:44.286686 kubelet[3316]: E1124 00:08:44.286604 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:08:44.286686 kubelet[3316]: E1124 00:08:44.286671 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:08:44.287224 kubelet[3316]: E1124 00:08:44.286885 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9s7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jqnrx_calico-system(18d54b97-5424-4119-892c-ebd148db0571): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:44.288203 kubelet[3316]: E1124 00:08:44.288121 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:08:44.438776 systemd-networkd[1838]: calie42b2bea121: Gained IPv6LL Nov 24 00:08:44.489845 systemd-networkd[1838]: cali4543e97ace5: Link UP Nov 24 00:08:44.490203 systemd-networkd[1838]: cali4543e97ace5: Gained carrier Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.352 [INFO][5504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0 calico-apiserver-7cc86c6ddc- calico-apiserver fe389aaa-291c-4fa0-a06f-e4820906cbf6 868 0 2025-11-24 00:08:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cc86c6ddc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-87 calico-apiserver-7cc86c6ddc-jtm28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4543e97ace5 [] [] }} ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.354 [INFO][5504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.415 [INFO][5517] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" HandleID="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Workload="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.417 [INFO][5517] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" HandleID="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Workload="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364f40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-87", "pod":"calico-apiserver-7cc86c6ddc-jtm28", "timestamp":"2025-11-24 00:08:44.415870003 +0000 UTC"}, Hostname:"ip-172-31-16-87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.417 [INFO][5517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.417 [INFO][5517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.417 [INFO][5517] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-87' Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.429 [INFO][5517] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.436 [INFO][5517] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.448 [INFO][5517] ipam/ipam.go 511: Trying affinity for 192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.451 [INFO][5517] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.455 [INFO][5517] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.192/26 host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.455 [INFO][5517] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.192/26 handle="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.457 [INFO][5517] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570 Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.464 [INFO][5517] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.192/26 handle="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.477 [INFO][5517] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.200/26] block=192.168.25.192/26 handle="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.477 
[INFO][5517] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.200/26] handle="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" host="ip-172-31-16-87" Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.477 [INFO][5517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:08:44.536227 containerd[1972]: 2025-11-24 00:08:44.478 [INFO][5517] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.200/26] IPv6=[] ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" HandleID="k8s-pod-network.001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Workload="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.537199 containerd[1972]: 2025-11-24 00:08:44.482 [INFO][5504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0", GenerateName:"calico-apiserver-7cc86c6ddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe389aaa-291c-4fa0-a06f-e4820906cbf6", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc86c6ddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"", Pod:"calico-apiserver-7cc86c6ddc-jtm28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4543e97ace5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:44.537199 containerd[1972]: 2025-11-24 00:08:44.482 [INFO][5504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.200/32] ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.537199 containerd[1972]: 2025-11-24 00:08:44.483 [INFO][5504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4543e97ace5 ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.537199 containerd[1972]: 2025-11-24 00:08:44.487 [INFO][5504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.537199 containerd[1972]: 2025-11-24 00:08:44.487 [INFO][5504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0", GenerateName:"calico-apiserver-7cc86c6ddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe389aaa-291c-4fa0-a06f-e4820906cbf6", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cc86c6ddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-87", ContainerID:"001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570", Pod:"calico-apiserver-7cc86c6ddc-jtm28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4543e97ace5", MAC:"72:d5:52:65:cb:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:08:44.537199 containerd[1972]: 2025-11-24 00:08:44.528 [INFO][5504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" Namespace="calico-apiserver" Pod="calico-apiserver-7cc86c6ddc-jtm28" WorkloadEndpoint="ip--172--31--16--87-k8s-calico--apiserver--7cc86c6ddc--jtm28-eth0" Nov 24 00:08:44.617023 containerd[1972]: time="2025-11-24T00:08:44.616945968Z" level=info msg="connecting to shim 001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570" address="unix:///run/containerd/s/c6b5e912e888420ecabacf179d6f171e1d386891b17df96b1b82318281c44470" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:08:44.631780 systemd-networkd[1838]: calia19317b3c91: Gained IPv6LL Nov 24 00:08:44.709995 systemd[1]: Started cri-containerd-001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570.scope - libcontainer container 001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570. Nov 24 00:08:44.757862 systemd-networkd[1838]: calic0d6e99c2af: Gained IPv6LL Nov 24 00:08:44.793076 systemd[1]: Started sshd@9-172.31.16.87:22-139.178.68.195:45268.service - OpenSSH per-connection server daemon (139.178.68.195:45268). 
Nov 24 00:08:44.872637 kubelet[3316]: E1124 00:08:44.872361 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:08:44.875753 kubelet[3316]: E1124 00:08:44.875607 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:08:44.878901 containerd[1972]: time="2025-11-24T00:08:44.877940387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cc86c6ddc-jtm28,Uid:fe389aaa-291c-4fa0-a06f-e4820906cbf6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"001c519ccadcd0f0b00c235a9bc7ba3c6dff884c3982f65c5a4497d1b116b570\"" Nov 24 00:08:44.890590 containerd[1972]: time="2025-11-24T00:08:44.889113491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:08:45.017961 kubelet[3316]: I1124 00:08:45.017885 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bqwch" podStartSLOduration=48.017859771 podStartE2EDuration="48.017859771s" podCreationTimestamp="2025-11-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:08:44.992871911 +0000 UTC m=+52.982894546" watchObservedRunningTime="2025-11-24 00:08:45.017859771 +0000 UTC m=+53.007882412" Nov 24 00:08:45.079271 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 45268 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:08:45.086611 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:08:45.097723 systemd-logind[1954]: New session 10 of user core. Nov 24 00:08:45.102834 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 24 00:08:45.212135 containerd[1972]: time="2025-11-24T00:08:45.212075571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:45.214771 containerd[1972]: time="2025-11-24T00:08:45.214658567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:08:45.214935 containerd[1972]: time="2025-11-24T00:08:45.214702869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:08:45.215445 kubelet[3316]: E1124 00:08:45.215232 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:45.215445 kubelet[3316]: E1124 00:08:45.215299 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:45.216386 kubelet[3316]: E1124 00:08:45.216110 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5n9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jtm28_calico-apiserver(fe389aaa-291c-4fa0-a06f-e4820906cbf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:45.217612 kubelet[3316]: E1124 00:08:45.217542 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:08:45.872743 kubelet[3316]: E1124 00:08:45.871936 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:08:46.233963 sshd[5590]: Connection closed by 139.178.68.195 port 45268 Nov 24 00:08:46.234842 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Nov 24 00:08:46.250603 systemd[1]: sshd@9-172.31.16.87:22-139.178.68.195:45268.service: Deactivated successfully. Nov 24 00:08:46.255944 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:08:46.258399 systemd-logind[1954]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:08:46.263059 systemd-logind[1954]: Removed session 10. 
Nov 24 00:08:46.421823 systemd-networkd[1838]: cali4543e97ace5: Gained IPv6LL Nov 24 00:08:46.876196 kubelet[3316]: E1124 00:08:46.875936 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:08:49.238079 ntpd[2162]: Listen normally on 6 vxlan.calico 192.168.25.192:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 6 vxlan.calico 192.168.25.192:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 7 cali605de2d3b2e [fe80::ecee:eeff:feee:eeee%4]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 8 vxlan.calico [fe80::640c:6bff:fe2a:8baf%5]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 9 cali8ab4c5c0900 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 10 cali5b33f8aa480 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 11 cali64481a8f21a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 12 calia19317b3c91 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 13 calie42b2bea121 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 14 calic0d6e99c2af [fe80::ecee:eeff:feee:eeee%13]:123 Nov 24 00:08:49.238795 ntpd[2162]: 24 Nov 00:08:49 ntpd[2162]: Listen normally on 15 cali4543e97ace5 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 24 00:08:49.238157 ntpd[2162]: Listen normally on 7 cali605de2d3b2e [fe80::ecee:eeff:feee:eeee%4]:123 Nov 24 00:08:49.238183 ntpd[2162]: Listen normally on 8 vxlan.calico [fe80::640c:6bff:fe2a:8baf%5]:123 Nov 24 00:08:49.238202 ntpd[2162]: Listen normally on 9 cali8ab4c5c0900 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 24 00:08:49.238222 ntpd[2162]: Listen normally on 10 cali5b33f8aa480 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 24 00:08:49.238240 ntpd[2162]: Listen normally on 11 cali64481a8f21a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 24 00:08:49.238262 ntpd[2162]: Listen normally on 12 calia19317b3c91 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 24 00:08:49.238282 ntpd[2162]: Listen normally on 13 calie42b2bea121 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 24 00:08:49.238303 ntpd[2162]: Listen normally on 14 calic0d6e99c2af [fe80::ecee:eeff:feee:eeee%13]:123 Nov 24 00:08:49.238321 ntpd[2162]: Listen normally on 15 cali4543e97ace5 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 24 00:08:51.270548 systemd[1]: Started sshd@10-172.31.16.87:22-139.178.68.195:39302.service - OpenSSH per-connection server daemon (139.178.68.195:39302). 
Nov 24 00:08:51.461180 sshd[5625]: Accepted publickey for core from 139.178.68.195 port 39302 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:08:51.463031 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:08:51.469895 systemd-logind[1954]: New session 11 of user core. Nov 24 00:08:51.480909 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:08:51.711256 sshd[5628]: Connection closed by 139.178.68.195 port 39302 Nov 24 00:08:51.712891 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Nov 24 00:08:51.719120 systemd-logind[1954]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:08:51.720092 systemd[1]: sshd@10-172.31.16.87:22-139.178.68.195:39302.service: Deactivated successfully. Nov 24 00:08:51.723261 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:08:51.726372 systemd-logind[1954]: Removed session 11. Nov 24 00:08:55.238614 containerd[1972]: time="2025-11-24T00:08:55.238462517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:08:55.493534 containerd[1972]: time="2025-11-24T00:08:55.493227304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:55.496032 containerd[1972]: time="2025-11-24T00:08:55.495962665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:08:55.496230 containerd[1972]: time="2025-11-24T00:08:55.496093690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:08:55.496469 kubelet[3316]: E1124 00:08:55.496345 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:08:55.497016 kubelet[3316]: E1124 00:08:55.496482 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:08:55.497016 kubelet[3316]: E1124 00:08:55.496887 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:de3c06f5af6f4d53b271c97ff9b037fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:55.500417 containerd[1972]: time="2025-11-24T00:08:55.500365222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:08:55.772096 containerd[1972]: time="2025-11-24T00:08:55.772011732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:55.774971 containerd[1972]: time="2025-11-24T00:08:55.774904598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:08:55.775926 containerd[1972]: time="2025-11-24T00:08:55.775005392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:08:55.776006 kubelet[3316]: E1124 00:08:55.775188 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:08:55.776006 kubelet[3316]: E1124 00:08:55.775246 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:08:55.776373 kubelet[3316]: E1124 00:08:55.776256 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:55.777586 kubelet[3316]: E1124 00:08:55.777507 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:08:56.242996 containerd[1972]: time="2025-11-24T00:08:56.242847164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:08:56.483123 containerd[1972]: time="2025-11-24T00:08:56.483066563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 
00:08:56.485553 containerd[1972]: time="2025-11-24T00:08:56.485472355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:08:56.486127 containerd[1972]: time="2025-11-24T00:08:56.485753106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:08:56.486328 kubelet[3316]: E1124 00:08:56.486171 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:08:56.486328 kubelet[3316]: E1124 00:08:56.486232 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:08:56.487137 kubelet[3316]: E1124 00:08:56.487057 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:56.489908 containerd[1972]: time="2025-11-24T00:08:56.489359254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:08:56.736532 containerd[1972]: time="2025-11-24T00:08:56.735440955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:56.739104 containerd[1972]: time="2025-11-24T00:08:56.738967537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:08:56.739104 containerd[1972]: time="2025-11-24T00:08:56.739029970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:08:56.739360 kubelet[3316]: E1124 00:08:56.739317 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:08:56.740683 kubelet[3316]: E1124 00:08:56.739373 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:08:56.740683 kubelet[3316]: E1124 00:08:56.739701 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:56.741448 kubelet[3316]: E1124 00:08:56.741402 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:08:56.752955 systemd[1]: Started sshd@11-172.31.16.87:22-139.178.68.195:39314.service - OpenSSH per-connection server daemon (139.178.68.195:39314). Nov 24 00:08:56.949792 sshd[5645]: Accepted publickey for core from 139.178.68.195 port 39314 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:08:56.951417 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:08:56.957393 systemd-logind[1954]: New session 12 of user core. 
Nov 24 00:08:56.963980 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 00:08:57.216121 sshd[5648]: Connection closed by 139.178.68.195 port 39314 Nov 24 00:08:57.216811 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Nov 24 00:08:57.225088 systemd-logind[1954]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:08:57.226395 systemd[1]: sshd@11-172.31.16.87:22-139.178.68.195:39314.service: Deactivated successfully. Nov 24 00:08:57.229810 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:08:57.232157 systemd-logind[1954]: Removed session 12. Nov 24 00:08:57.252766 containerd[1972]: time="2025-11-24T00:08:57.252659618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:08:57.292854 systemd[1]: Started sshd@12-172.31.16.87:22-139.178.68.195:39328.service - OpenSSH per-connection server daemon (139.178.68.195:39328). Nov 24 00:08:57.503438 sshd[5661]: Accepted publickey for core from 139.178.68.195 port 39328 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:08:57.505077 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:08:57.512028 systemd-logind[1954]: New session 13 of user core. Nov 24 00:08:57.521860 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:08:57.525808 containerd[1972]: time="2025-11-24T00:08:57.525655158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:57.529179 containerd[1972]: time="2025-11-24T00:08:57.528910102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:08:57.529179 containerd[1972]: time="2025-11-24T00:08:57.528907566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:08:57.529396 kubelet[3316]: E1124 00:08:57.529268 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:57.529396 kubelet[3316]: E1124 00:08:57.529330 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:57.529582 kubelet[3316]: E1124 00:08:57.529524 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx9h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jzjm6_calico-apiserver(63a82b4c-a5db-46d5-9bde-8b4be9966835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:57.531436 kubelet[3316]: E1124 00:08:57.531370 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:08:57.783402 sshd[5664]: Connection closed by 139.178.68.195 port 39328 Nov 24 00:08:57.786282 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Nov 24 00:08:57.798302 systemd-logind[1954]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:08:57.799150 systemd[1]: sshd@12-172.31.16.87:22-139.178.68.195:39328.service: Deactivated successfully. Nov 24 00:08:57.806786 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:08:57.832193 systemd-logind[1954]: Removed session 13. Nov 24 00:08:57.836228 systemd[1]: Started sshd@13-172.31.16.87:22-139.178.68.195:39342.service - OpenSSH per-connection server daemon (139.178.68.195:39342). 
Nov 24 00:08:58.013412 sshd[5673]: Accepted publickey for core from 139.178.68.195 port 39342 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:08:58.015130 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:08:58.021650 systemd-logind[1954]: New session 14 of user core. Nov 24 00:08:58.029911 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:08:58.238658 sshd[5676]: Connection closed by 139.178.68.195 port 39342 Nov 24 00:08:58.240219 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Nov 24 00:08:58.246832 containerd[1972]: time="2025-11-24T00:08:58.244916476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:08:58.253072 systemd[1]: sshd@13-172.31.16.87:22-139.178.68.195:39342.service: Deactivated successfully. Nov 24 00:08:58.259339 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:08:58.262787 systemd-logind[1954]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:08:58.265455 systemd-logind[1954]: Removed session 14. Nov 24 00:08:58.493608 containerd[1972]: time="2025-11-24T00:08:58.493280945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:58.495990 containerd[1972]: time="2025-11-24T00:08:58.495907973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:08:58.496178 containerd[1972]: time="2025-11-24T00:08:58.495953237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:08:58.496360 kubelet[3316]: E1124 00:08:58.496308 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:08:58.497111 kubelet[3316]: E1124 00:08:58.496384 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:08:58.497111 kubelet[3316]: E1124 00:08:58.496769 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9s7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jqnrx_calico-system(18d54b97-5424-4119-892c-ebd148db0571): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:58.500039 kubelet[3316]: E1124 00:08:58.499960 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:08:58.508029 containerd[1972]: 
time="2025-11-24T00:08:58.507986032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:08:58.789242 containerd[1972]: time="2025-11-24T00:08:58.789176693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:58.791477 containerd[1972]: time="2025-11-24T00:08:58.791413134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:08:58.791692 containerd[1972]: time="2025-11-24T00:08:58.791440173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:08:58.792057 kubelet[3316]: E1124 00:08:58.792006 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:08:58.792057 kubelet[3316]: E1124 00:08:58.792058 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:08:58.794364 kubelet[3316]: E1124 00:08:58.794275 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-857d84d84d-ncvx2_calico-system(293f9213-9ce6-465e-8d91-13e61a8f35a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:58.795825 kubelet[3316]: E1124 00:08:58.795740 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:08:59.239015 containerd[1972]: time="2025-11-24T00:08:59.238850712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:08:59.466485 containerd[1972]: time="2025-11-24T00:08:59.466400998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:08:59.469068 containerd[1972]: time="2025-11-24T00:08:59.468975730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:08:59.469232 containerd[1972]: time="2025-11-24T00:08:59.469090889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:08:59.469319 kubelet[3316]: E1124 00:08:59.469276 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:59.469396 kubelet[3316]: E1124 00:08:59.469331 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:08:59.469585 kubelet[3316]: 
E1124 00:08:59.469516 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5n9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jtm28_calico-apiserver(fe389aaa-291c-4fa0-a06f-e4820906cbf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:08:59.470935 kubelet[3316]: E1124 00:08:59.470862 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:09:03.290714 systemd[1]: Started sshd@14-172.31.16.87:22-139.178.68.195:34242.service - OpenSSH per-connection server daemon (139.178.68.195:34242). Nov 24 00:09:03.562621 sshd[5700]: Accepted publickey for core from 139.178.68.195 port 34242 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:03.565391 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:03.584386 systemd-logind[1954]: New session 15 of user core. 
Nov 24 00:09:03.589836 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:09:03.964695 sshd[5703]: Connection closed by 139.178.68.195 port 34242 Nov 24 00:09:03.966116 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:03.977749 systemd[1]: sshd@14-172.31.16.87:22-139.178.68.195:34242.service: Deactivated successfully. Nov 24 00:09:03.983377 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:09:03.986004 systemd-logind[1954]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:09:03.988364 systemd-logind[1954]: Removed session 15. Nov 24 00:09:07.243401 kubelet[3316]: E1124 00:09:07.243316 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:09:08.239753 kubelet[3316]: E1124 00:09:08.239215 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:09:09.006592 systemd[1]: Started sshd@15-172.31.16.87:22-139.178.68.195:34246.service - OpenSSH per-connection server daemon (139.178.68.195:34246). Nov 24 00:09:09.247398 sshd[5717]: Accepted publickey for core from 139.178.68.195 port 34246 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:09.249705 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:09.257083 systemd-logind[1954]: New session 16 of user core. Nov 24 00:09:09.264637 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:09:09.550753 sshd[5720]: Connection closed by 139.178.68.195 port 34246 Nov 24 00:09:09.552805 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:09.560948 systemd[1]: sshd@15-172.31.16.87:22-139.178.68.195:34246.service: Deactivated successfully. Nov 24 00:09:09.564539 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:09:09.566369 systemd-logind[1954]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:09:09.569269 systemd-logind[1954]: Removed session 16. 
Nov 24 00:09:10.242208 kubelet[3316]: E1124 00:09:10.241122 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:09:10.244880 kubelet[3316]: E1124 00:09:10.244746 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:09:12.240853 kubelet[3316]: E1124 00:09:12.240541 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:09:13.241605 kubelet[3316]: E1124 00:09:13.241316 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:09:14.592626 systemd[1]: Started sshd@16-172.31.16.87:22-139.178.68.195:49368.service - OpenSSH per-connection server daemon (139.178.68.195:49368). Nov 24 00:09:14.853588 sshd[5760]: Accepted publickey for core from 139.178.68.195 port 49368 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:14.856665 sshd-session[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:14.869739 systemd-logind[1954]: New session 17 of user core. 
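[Editor's note] By this point the same back-off messages recur for several images (apiserver, csi, node-driver-registrar, whisker, whisker-backend, kube-controllers, goldmane). A minimal sketch for summarizing which image references keep failing, and for which pods, by scanning a journal dump like this one on stdin; the regexes are an assumption about these exact kubelet "Error syncing pod" lines:

import re
import sys
from collections import Counter

# The "Error syncing pod" lines carry both the failing image reference(s) and the pod.
POD_RE = re.compile(r'pod="([^"]+)"')
IMAGE_RE = re.compile(r'ghcr\.io/flatcar/calico/[\w.-]+:v[\d.]+')

failures = Counter()
for line in sys.stdin:
    if "Error syncing pod" not in line:
        continue
    pod = (POD_RE.findall(line) or ["unknown"])[0]
    for image in set(IMAGE_RE.findall(line)):
        failures[(pod, image)] += 1

for (pod, image), count in failures.most_common():
    print(f"{count:3d}  {pod:55s}  {image}")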
Nov 24 00:09:14.874843 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:09:15.195900 sshd[5763]: Connection closed by 139.178.68.195 port 49368 Nov 24 00:09:15.197311 sshd-session[5760]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:15.205860 systemd-logind[1954]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:09:15.206101 systemd[1]: sshd@16-172.31.16.87:22-139.178.68.195:49368.service: Deactivated successfully. Nov 24 00:09:15.210399 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:09:15.214357 systemd-logind[1954]: Removed session 17. Nov 24 00:09:20.235310 systemd[1]: Started sshd@17-172.31.16.87:22-139.178.68.195:45332.service - OpenSSH per-connection server daemon (139.178.68.195:45332). Nov 24 00:09:20.258251 containerd[1972]: time="2025-11-24T00:09:20.257447412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:09:20.525372 containerd[1972]: time="2025-11-24T00:09:20.525083101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:20.527548 containerd[1972]: time="2025-11-24T00:09:20.527294061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:09:20.527548 containerd[1972]: time="2025-11-24T00:09:20.527396302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:09:20.528872 kubelet[3316]: E1124 00:09:20.528809 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:09:20.530583 kubelet[3316]: E1124 00:09:20.528882 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:09:20.530583 kubelet[3316]: E1124 00:09:20.529110 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx9h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jzjm6_calico-apiserver(63a82b4c-a5db-46d5-9bde-8b4be9966835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:20.531549 kubelet[3316]: E1124 00:09:20.530767 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:09:20.533987 sshd[5777]: Accepted publickey for core from 139.178.68.195 port 45332 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:20.540074 sshd-session[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:20.551132 systemd-logind[1954]: New session 18 of user core. Nov 24 00:09:20.559864 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:09:20.902110 sshd[5787]: Connection closed by 139.178.68.195 port 45332 Nov 24 00:09:20.902987 sshd-session[5777]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:20.913479 systemd-logind[1954]: Session 18 logged out. Waiting for processes to exit. 
Nov 24 00:09:20.914252 systemd[1]: sshd@17-172.31.16.87:22-139.178.68.195:45332.service: Deactivated successfully. Nov 24 00:09:20.917092 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:09:20.922456 systemd-logind[1954]: Removed session 18. Nov 24 00:09:20.946686 systemd[1]: Started sshd@18-172.31.16.87:22-139.178.68.195:45348.service - OpenSSH per-connection server daemon (139.178.68.195:45348). Nov 24 00:09:21.141289 sshd[5799]: Accepted publickey for core from 139.178.68.195 port 45348 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:21.144305 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:21.151157 systemd-logind[1954]: New session 19 of user core. Nov 24 00:09:21.156828 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 00:09:21.920900 sshd[5802]: Connection closed by 139.178.68.195 port 45348 Nov 24 00:09:21.923736 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:21.929947 systemd[1]: sshd@18-172.31.16.87:22-139.178.68.195:45348.service: Deactivated successfully. Nov 24 00:09:21.938441 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:09:21.941278 systemd-logind[1954]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:09:21.964480 systemd-logind[1954]: Removed session 19. Nov 24 00:09:21.968064 systemd[1]: Started sshd@19-172.31.16.87:22-139.178.68.195:45356.service - OpenSSH per-connection server daemon (139.178.68.195:45356). Nov 24 00:09:22.236653 sshd[5812]: Accepted publickey for core from 139.178.68.195 port 45356 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:22.241539 sshd-session[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:22.250588 containerd[1972]: time="2025-11-24T00:09:22.247716629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:09:22.257113 systemd-logind[1954]: New session 20 of user core. Nov 24 00:09:22.266067 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 24 00:09:22.565985 containerd[1972]: time="2025-11-24T00:09:22.565932011Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:22.568832 containerd[1972]: time="2025-11-24T00:09:22.568732080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:09:22.570743 containerd[1972]: time="2025-11-24T00:09:22.569039941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:09:22.570831 kubelet[3316]: E1124 00:09:22.569239 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:09:22.570831 kubelet[3316]: E1124 00:09:22.569296 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:09:22.570831 kubelet[3316]: E1124 00:09:22.570546 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:22.574804 containerd[1972]: time="2025-11-24T00:09:22.574621625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:09:22.834784 containerd[1972]: time="2025-11-24T00:09:22.834136453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:22.836808 containerd[1972]: time="2025-11-24T00:09:22.836530594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:09:22.837761 containerd[1972]: time="2025-11-24T00:09:22.836579930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:09:22.838542 kubelet[3316]: E1124 00:09:22.838459 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:09:22.838945 kubelet[3316]: E1124 00:09:22.838626 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:09:22.839626 kubelet[3316]: E1124 00:09:22.838902 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:22.840685 kubelet[3316]: E1124 00:09:22.840554 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:09:23.606855 sshd[5815]: Connection closed by 139.178.68.195 port 45356 Nov 24 00:09:23.615653 sshd-session[5812]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:23.642032 systemd[1]: sshd@19-172.31.16.87:22-139.178.68.195:45356.service: Deactivated successfully. Nov 24 00:09:23.647329 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:09:23.651726 systemd-logind[1954]: Session 20 logged out. Waiting for processes to exit. 
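[Editor's note] Successive PullImage attempts for the same reference appear at growing intervals, which reflects kubelet's image-pull back-off. A minimal sketch, under the assumption that the containerd "PullImage" info lines keep the exact format shown above, for measuring the gap between attempts per image from a journal dump on stdin:

import re
import sys
from collections import defaultdict
from datetime import datetime

ATTEMPT_RE = re.compile(r'time="([^"]+)" level=info msg="PullImage \\"([^\\"]+)\\""')

def parse_ts(ts):
    # Truncate nanoseconds to microseconds and normalise the trailing Z.
    ts = re.sub(r'(\.\d{6})\d+', r'\1', ts).replace("Z", "+00:00")
    return datetime.fromisoformat(ts)

attempts = defaultdict(list)
for line in sys.stdin:
    for ts, image in ATTEMPT_RE.findall(line):
        attempts[image].append(parse_ts(ts))

for image, times in sorted(attempts.items()):
    gaps = [round((b - a).total_seconds()) for a, b in zip(times, times[1:])]
    print(image, "seconds between attempts:", gaps)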
Nov 24 00:09:23.658116 systemd[1]: Started sshd@20-172.31.16.87:22-139.178.68.195:45358.service - OpenSSH per-connection server daemon (139.178.68.195:45358). Nov 24 00:09:23.674330 systemd-logind[1954]: Removed session 20. Nov 24 00:09:23.891988 sshd[5833]: Accepted publickey for core from 139.178.68.195 port 45358 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:23.895710 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:23.903695 systemd-logind[1954]: New session 21 of user core. Nov 24 00:09:23.910829 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 00:09:24.246282 containerd[1972]: time="2025-11-24T00:09:24.245981029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:09:24.532064 containerd[1972]: time="2025-11-24T00:09:24.531111196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:24.534173 containerd[1972]: time="2025-11-24T00:09:24.534003793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:09:24.534663 kubelet[3316]: E1124 00:09:24.534607 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:09:24.535168 kubelet[3316]: E1124 00:09:24.534674 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:09:24.536333 kubelet[3316]: E1124 00:09:24.535685 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-857d84d84d-ncvx2_calico-system(293f9213-9ce6-465e-8d91-13e61a8f35a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:24.537080 kubelet[3316]: E1124 00:09:24.536888 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:09:24.598855 containerd[1972]: time="2025-11-24T00:09:24.534143884Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:09:24.598855 containerd[1972]: time="2025-11-24T00:09:24.539058801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:09:24.853415 containerd[1972]: time="2025-11-24T00:09:24.852990591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:24.862832 containerd[1972]: time="2025-11-24T00:09:24.862659901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:09:24.862832 containerd[1972]: time="2025-11-24T00:09:24.862790284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:09:24.863426 kubelet[3316]: E1124 00:09:24.863299 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:09:24.863426 kubelet[3316]: E1124 00:09:24.863382 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:09:24.864700 kubelet[3316]: E1124 00:09:24.864609 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5n9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jtm28_calico-apiserver(fe389aaa-291c-4fa0-a06f-e4820906cbf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:24.873894 kubelet[3316]: E1124 00:09:24.873629 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:09:25.077723 sshd[5839]: Connection closed by 139.178.68.195 port 45358 Nov 24 00:09:25.079801 sshd-session[5833]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:25.088242 systemd-logind[1954]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:09:25.089172 systemd[1]: sshd@20-172.31.16.87:22-139.178.68.195:45358.service: Deactivated successfully. Nov 24 00:09:25.095171 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:09:25.119662 systemd-logind[1954]: Removed session 21. Nov 24 00:09:25.121859 systemd[1]: Started sshd@21-172.31.16.87:22-139.178.68.195:45364.service - OpenSSH per-connection server daemon (139.178.68.195:45364). Nov 24 00:09:25.243627 containerd[1972]: time="2025-11-24T00:09:25.242842169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:09:25.365458 sshd[5851]: Accepted publickey for core from 139.178.68.195 port 45364 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:25.367774 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:25.377431 systemd-logind[1954]: New session 22 of user core. Nov 24 00:09:25.385846 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 24 00:09:25.512592 containerd[1972]: time="2025-11-24T00:09:25.511625742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:25.513686 containerd[1972]: time="2025-11-24T00:09:25.513628959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:09:25.513825 containerd[1972]: time="2025-11-24T00:09:25.513760795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:09:25.514109 kubelet[3316]: E1124 00:09:25.514044 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:09:25.514191 kubelet[3316]: E1124 00:09:25.514129 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:09:25.514941 kubelet[3316]: E1124 00:09:25.514341 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:de3c06f5af6f4d53b271c97ff9b037fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:25.517638 containerd[1972]: 
time="2025-11-24T00:09:25.517354872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:09:25.654451 sshd[5854]: Connection closed by 139.178.68.195 port 45364 Nov 24 00:09:25.656587 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:25.665761 systemd-logind[1954]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:09:25.666277 systemd[1]: sshd@21-172.31.16.87:22-139.178.68.195:45364.service: Deactivated successfully. Nov 24 00:09:25.673006 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 00:09:25.678555 systemd-logind[1954]: Removed session 22. Nov 24 00:09:25.799807 containerd[1972]: time="2025-11-24T00:09:25.798738034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:25.801280 containerd[1972]: time="2025-11-24T00:09:25.800992590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:09:25.801396 containerd[1972]: time="2025-11-24T00:09:25.801241556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:09:25.801733 kubelet[3316]: E1124 00:09:25.801583 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:09:25.801733 kubelet[3316]: E1124 00:09:25.801654 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:09:25.802104 kubelet[3316]: E1124 00:09:25.801901 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:25.803177 kubelet[3316]: E1124 00:09:25.803136 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:09:26.242920 containerd[1972]: time="2025-11-24T00:09:26.242868290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:09:26.528133 containerd[1972]: time="2025-11-24T00:09:26.528070393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:09:26.530306 containerd[1972]: time="2025-11-24T00:09:26.530236613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:09:26.531868 containerd[1972]: time="2025-11-24T00:09:26.530390802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:09:26.532150 kubelet[3316]: E1124 00:09:26.532086 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:09:26.532234 kubelet[3316]: E1124 00:09:26.532170 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:09:26.539081 kubelet[3316]: E1124 00:09:26.532887 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9s7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jqnrx_calico-system(18d54b97-5424-4119-892c-ebd148db0571): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:09:26.540282 kubelet[3316]: E1124 00:09:26.540220 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:09:30.695902 systemd[1]: Started sshd@22-172.31.16.87:22-139.178.68.195:49792.service - OpenSSH per-connection server daemon (139.178.68.195:49792). Nov 24 00:09:30.901445 sshd[5871]: Accepted publickey for core from 139.178.68.195 port 49792 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:30.904069 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:30.915921 systemd-logind[1954]: New session 23 of user core. Nov 24 00:09:30.924829 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 00:09:31.197934 sshd[5874]: Connection closed by 139.178.68.195 port 49792 Nov 24 00:09:31.198813 sshd-session[5871]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:31.208757 systemd[1]: sshd@22-172.31.16.87:22-139.178.68.195:49792.service: Deactivated successfully. Nov 24 00:09:31.213494 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 00:09:31.216705 systemd-logind[1954]: Session 23 logged out. Waiting for processes to exit. Nov 24 00:09:31.219395 systemd-logind[1954]: Removed session 23. 
Nov 24 00:09:32.242410 kubelet[3316]: E1124 00:09:32.242327 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:09:33.240264 kubelet[3316]: E1124 00:09:33.240200 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:09:36.234950 systemd[1]: Started sshd@23-172.31.16.87:22-139.178.68.195:49806.service - OpenSSH per-connection server daemon (139.178.68.195:49806). Nov 24 00:09:36.256464 kubelet[3316]: E1124 00:09:36.256356 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:09:36.479060 sshd[5886]: Accepted publickey for core from 139.178.68.195 port 49806 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:36.480680 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:36.491870 systemd-logind[1954]: New session 24 of user core. Nov 24 00:09:36.496805 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 24 00:09:36.729586 sshd[5889]: Connection closed by 139.178.68.195 port 49806 Nov 24 00:09:36.731151 sshd-session[5886]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:36.740882 systemd[1]: sshd@23-172.31.16.87:22-139.178.68.195:49806.service: Deactivated successfully. Nov 24 00:09:36.746972 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 00:09:36.751161 systemd-logind[1954]: Session 24 logged out. Waiting for processes to exit. Nov 24 00:09:36.755000 systemd-logind[1954]: Removed session 24. 
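Once a pull has failed with ErrImagePull, the kubelet stops retrying on every sync and moves the container into ImagePullBackOff, which is what the pod_workers entries above report; the delay between pull attempts grows roughly exponentially, which is why the PullImage lines reappear only every few minutes. A small sketch of that schedule follows, assuming the kubelet's usual defaults of a 10-second initial backoff doubling to a 5-minute cap; treat the exact numbers as assumptions about this cluster's configuration.

    # Rough sketch of the image-pull backoff schedule between attempts;
    # 10s doubling to a 300s cap are typical defaults, assumed here.
    def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
        delay = initial
        for attempt in range(1, attempts + 1):
            yield attempt, delay
            delay = min(delay * 2, cap)

    for attempt, delay in backoff_schedule():
        print(f"attempt {attempt}: wait {delay:.0f}s before pulling again")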
Nov 24 00:09:37.240525 kubelet[3316]: E1124 00:09:37.240460 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:09:39.239222 kubelet[3316]: E1124 00:09:39.239170 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:09:41.768835 systemd[1]: Started sshd@24-172.31.16.87:22-139.178.68.195:36264.service - OpenSSH per-connection server daemon (139.178.68.195:36264). Nov 24 00:09:41.997204 sshd[5927]: Accepted publickey for core from 139.178.68.195 port 36264 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:42.000876 sshd-session[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:42.011079 systemd-logind[1954]: New session 25 of user core. Nov 24 00:09:42.015866 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 24 00:09:42.246517 kubelet[3316]: E1124 00:09:42.246359 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:09:42.352392 sshd[5930]: Connection closed by 139.178.68.195 port 36264 Nov 24 00:09:42.353929 sshd-session[5927]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:42.361087 systemd[1]: sshd@24-172.31.16.87:22-139.178.68.195:36264.service: Deactivated successfully. Nov 24 00:09:42.361448 systemd-logind[1954]: Session 25 logged out. Waiting for processes to exit. Nov 24 00:09:42.366839 systemd[1]: session-25.scope: Deactivated successfully. Nov 24 00:09:42.372697 systemd-logind[1954]: Removed session 25. 
Nov 24 00:09:46.244232 kubelet[3316]: E1124 00:09:46.243370 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:09:46.245979 kubelet[3316]: E1124 00:09:46.243795 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:09:46.693026 update_engine[1961]: I20251124 00:09:46.692811 1961 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 24 00:09:46.693026 update_engine[1961]: I20251124 00:09:46.692957 1961 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 24 00:09:46.696507 update_engine[1961]: I20251124 00:09:46.696372 1961 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 24 00:09:46.698300 update_engine[1961]: I20251124 00:09:46.698168 1961 omaha_request_params.cc:62] Current group set to stable Nov 24 00:09:46.698855 update_engine[1961]: I20251124 00:09:46.698705 1961 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 24 00:09:46.698855 update_engine[1961]: I20251124 00:09:46.698730 1961 update_attempter.cc:643] Scheduling an action processor start. 
Nov 24 00:09:46.698855 update_engine[1961]: I20251124 00:09:46.698761 1961 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 24 00:09:46.699602 update_engine[1961]: I20251124 00:09:46.699431 1961 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 24 00:09:46.699602 update_engine[1961]: I20251124 00:09:46.699544 1961 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 24 00:09:46.699945 update_engine[1961]: I20251124 00:09:46.699556 1961 omaha_request_action.cc:272] Request: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: Nov 24 00:09:46.699945 update_engine[1961]: I20251124 00:09:46.699682 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:09:46.728604 update_engine[1961]: I20251124 00:09:46.725772 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:09:46.728604 update_engine[1961]: I20251124 00:09:46.726639 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:09:46.741605 update_engine[1961]: E20251124 00:09:46.737982 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:09:46.741605 update_engine[1961]: I20251124 00:09:46.738128 1961 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 24 00:09:46.744259 locksmithd[2016]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 24 00:09:47.391797 systemd[1]: Started sshd@25-172.31.16.87:22-139.178.68.195:36272.service - OpenSSH per-connection server daemon (139.178.68.195:36272). Nov 24 00:09:47.621144 sshd[5946]: Accepted publickey for core from 139.178.68.195 port 36272 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:47.624851 sshd-session[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:47.647655 systemd-logind[1954]: New session 26 of user core. Nov 24 00:09:47.652946 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 24 00:09:48.094289 sshd[5949]: Connection closed by 139.178.68.195 port 36272 Nov 24 00:09:48.097861 sshd-session[5946]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:48.106870 systemd-logind[1954]: Session 26 logged out. Waiting for processes to exit. Nov 24 00:09:48.108024 systemd[1]: sshd@25-172.31.16.87:22-139.178.68.195:36272.service: Deactivated successfully. Nov 24 00:09:48.114600 systemd[1]: session-26.scope: Deactivated successfully. Nov 24 00:09:48.120432 systemd-logind[1954]: Removed session 26. 
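The update_engine failure above ("Could not resolve host: disabled") is not a DNS problem in the usual sense: the Omaha request is being posted to the literal server name "disabled", meaning automatic updates have been switched off and the client is failing to reach a placeholder host, then retrying on its one-second timeout source. The following sketch shows how such a setting is commonly expressed and read on Flatcar-style hosts; the file path and key name are assumptions, not taken from this log.

    # Sketch: read a SERVER= key from an update.conf-style file to see whether the
    # Omaha endpoint is a placeholder such as "disabled". The path
    # /etc/flatcar/update.conf is an assumption about the host, not from the log.
    def omaha_server(path: str = "/etc/flatcar/update.conf") -> str | None:
        try:
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line.startswith("SERVER="):
                        return line.split("=", 1)[1]
        except FileNotFoundError:
            return None
        return None

    server = omaha_server()
    if server == "disabled":
        print("updates disabled: update_engine will log resolve failures like the above")
    else:
        print(f"Omaha server: {server!r}")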
Nov 24 00:09:50.241652 kubelet[3316]: E1124 00:09:50.241576 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:09:52.246584 kubelet[3316]: E1124 00:09:52.245525 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:09:53.134916 systemd[1]: Started sshd@26-172.31.16.87:22-139.178.68.195:59480.service - OpenSSH per-connection server daemon (139.178.68.195:59480). Nov 24 00:09:53.382730 sshd[5965]: Accepted publickey for core from 139.178.68.195 port 59480 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:09:53.385967 sshd-session[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:09:53.396524 systemd-logind[1954]: New session 27 of user core. Nov 24 00:09:53.406666 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 24 00:09:53.757557 sshd[5968]: Connection closed by 139.178.68.195 port 59480 Nov 24 00:09:53.758698 sshd-session[5965]: pam_unix(sshd:session): session closed for user core Nov 24 00:09:53.764872 systemd-logind[1954]: Session 27 logged out. Waiting for processes to exit. Nov 24 00:09:53.766073 systemd[1]: sshd@26-172.31.16.87:22-139.178.68.195:59480.service: Deactivated successfully. Nov 24 00:09:53.770667 systemd[1]: session-27.scope: Deactivated successfully. Nov 24 00:09:53.777605 systemd-logind[1954]: Removed session 27. 
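The same back-off cycle keeps repeating for the kube-controllers and whisker pods. From outside the node, the quickest confirmation is the containers' waiting reason and restart count in the pod status, rather than the kubelet journal. A sketch using the Kubernetes Python client follows, assuming the client library is installed and has access to this cluster; the pod names are taken from the entries above.

    # Sketch: report waiting reasons (e.g. ImagePullBackOff / ErrImagePull) for the
    # pods named in the log above, via the Kubernetes Python client (assumed present).
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()

    for namespace, name in [
        ("calico-system", "calico-kube-controllers-857d84d84d-ncvx2"),
        ("calico-system", "whisker-dfdd4b85d-wmqzw"),
    ]:
        pod = v1.read_namespaced_pod(name, namespace)
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            reason = waiting.reason if waiting else "running/terminated"
            print(f"{namespace}/{name} {status.name}: {reason}, restarts={status.restart_count}")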
Nov 24 00:09:54.239705 kubelet[3316]: E1124 00:09:54.239377 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:09:55.241160 kubelet[3316]: E1124 00:09:55.241095 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:09:56.619739 update_engine[1961]: I20251124 00:09:56.619648 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:09:56.620329 update_engine[1961]: I20251124 00:09:56.619789 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:09:56.620329 update_engine[1961]: I20251124 00:09:56.620270 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:09:56.624709 update_engine[1961]: E20251124 00:09:56.624642 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:09:56.624876 update_engine[1961]: I20251124 00:09:56.624790 1961 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 24 00:10:01.242776 kubelet[3316]: E1124 00:10:01.242425 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:10:01.269869 kubelet[3316]: E1124 00:10:01.269714 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:10:01.292432 containerd[1972]: time="2025-11-24T00:10:01.242773726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:01.571199 containerd[1972]: time="2025-11-24T00:10:01.571136988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:01.573466 containerd[1972]: time="2025-11-24T00:10:01.573380692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:01.573663 containerd[1972]: time="2025-11-24T00:10:01.573388234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:01.573875 kubelet[3316]: E1124 00:10:01.573765 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:01.573980 kubelet[3316]: E1124 00:10:01.573895 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:01.574146 kubelet[3316]: E1124 00:10:01.574089 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx9h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jzjm6_calico-apiserver(63a82b4c-a5db-46d5-9bde-8b4be9966835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:01.575382 kubelet[3316]: E1124 00:10:01.575336 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:10:06.627731 update_engine[1961]: I20251124 00:10:06.627468 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:10:06.628432 update_engine[1961]: I20251124 00:10:06.627729 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:10:06.628432 update_engine[1961]: I20251124 00:10:06.628240 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:10:06.631018 update_engine[1961]: E20251124 00:10:06.630970 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:10:06.631147 update_engine[1961]: I20251124 00:10:06.631104 1961 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 24 00:10:07.188238 systemd[1]: cri-containerd-7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237.scope: Deactivated successfully. Nov 24 00:10:07.189270 systemd[1]: cri-containerd-7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237.scope: Consumed 16.278s CPU time, 105.5M memory peak, 45.1M read from disk. Nov 24 00:10:07.238428 containerd[1972]: time="2025-11-24T00:10:07.238122867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:10:07.269202 containerd[1972]: time="2025-11-24T00:10:07.269143729Z" level=info msg="received container exit event container_id:\"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\" id:\"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\" pid:3914 exit_status:1 exited_at:{seconds:1763943007 nanos:214998563}" Nov 24 00:10:07.335407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237-rootfs.mount: Deactivated successfully. 
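Here the failing pulls are joined by a local failure: the tigera-operator container (7d8f7c05...) exited with status 1 after consuming about 16 seconds of CPU time, its cgroup scope was deactivated, and its rootfs mount was cleaned up; the kubelet recreates it shortly afterwards (the "RemoveContainer" and CreateContainer Attempt:1 entries further below). The exited_at field in the containerd exit event is a protobuf-style timestamp, and converting it lines up with the journal time of the event. A small sketch of that conversion, using the seconds/nanos values copied from the exit event above:

    # Sketch: convert the protobuf-style exited_at timestamp from the containerd
    # exit event above into a readable UTC time; values copied from the log entry.
    from datetime import datetime, timezone

    seconds, nanos = 1763943007, 214998563
    exited_at = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(exited_at.isoformat())  # 2025-11-24T00:10:07.214998+00:00, matching the journal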
Nov 24 00:10:07.507835 containerd[1972]: time="2025-11-24T00:10:07.507758772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:07.510446 containerd[1972]: time="2025-11-24T00:10:07.510373703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:10:07.510726 containerd[1972]: time="2025-11-24T00:10:07.510408527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:10:07.510790 kubelet[3316]: E1124 00:10:07.510711 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:10:07.510790 kubelet[3316]: E1124 00:10:07.510776 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:10:07.511604 kubelet[3316]: E1124 00:10:07.510937 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:de3c06f5af6f4d53b271c97ff9b037fd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:07.515687 containerd[1972]: 
time="2025-11-24T00:10:07.515631359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:10:07.824018 containerd[1972]: time="2025-11-24T00:10:07.823818303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:07.826983 containerd[1972]: time="2025-11-24T00:10:07.826818214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:10:07.826983 containerd[1972]: time="2025-11-24T00:10:07.826892783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:07.827492 kubelet[3316]: E1124 00:10:07.827419 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:10:07.827580 kubelet[3316]: E1124 00:10:07.827487 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:10:07.828308 kubelet[3316]: E1124 00:10:07.827713 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68g7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dfdd4b85d-wmqzw_calico-system(2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:07.829498 kubelet[3316]: E1124 00:10:07.829429 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:10:08.241693 containerd[1972]: time="2025-11-24T00:10:08.241277729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:10:08.256073 systemd[1]: cri-containerd-2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2.scope: Deactivated successfully. Nov 24 00:10:08.256374 systemd[1]: cri-containerd-2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2.scope: Consumed 5.132s CPU time, 92.8M memory peak, 70.6M read from disk. 
Nov 24 00:10:08.264955 containerd[1972]: time="2025-11-24T00:10:08.264892410Z" level=info msg="received container exit event container_id:\"2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2\" id:\"2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2\" pid:3154 exit_status:1 exited_at:{seconds:1763943008 nanos:264283841}" Nov 24 00:10:08.336897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2-rootfs.mount: Deactivated successfully. Nov 24 00:10:08.349320 kubelet[3316]: I1124 00:10:08.349259 3316 scope.go:117] "RemoveContainer" containerID="7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237" Nov 24 00:10:08.376233 containerd[1972]: time="2025-11-24T00:10:08.376113894Z" level=info msg="CreateContainer within sandbox \"49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 24 00:10:08.451280 containerd[1972]: time="2025-11-24T00:10:08.451113991Z" level=info msg="Container 51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:10:08.495540 containerd[1972]: time="2025-11-24T00:10:08.494920583Z" level=info msg="CreateContainer within sandbox \"49f2fcd7f98ed56e4654c19eae452c366b40550a70be617544f96333f1ced142\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2\"" Nov 24 00:10:08.496829 containerd[1972]: time="2025-11-24T00:10:08.495927958Z" level=info msg="StartContainer for \"51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2\"" Nov 24 00:10:08.497573 containerd[1972]: time="2025-11-24T00:10:08.497526511Z" level=info msg="connecting to shim 51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2" address="unix:///run/containerd/s/c7e793adc2a60379c6220bfdea6c12b7244d9f630a6d921ac1c8d3593d2d2f71" protocol=ttrpc version=3 Nov 24 00:10:08.523481 containerd[1972]: time="2025-11-24T00:10:08.523291229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:08.525629 containerd[1972]: time="2025-11-24T00:10:08.525467322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:10:08.525946 containerd[1972]: time="2025-11-24T00:10:08.525926612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:08.526650 kubelet[3316]: E1124 00:10:08.526196 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:08.526954 systemd[1]: Started cri-containerd-51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2.scope - libcontainer container 51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2. 
Nov 24 00:10:08.529165 kubelet[3316]: E1124 00:10:08.528681 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:10:08.529165 kubelet[3316]: E1124 00:10:08.529085 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5n9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cc86c6ddc-jtm28_calico-apiserver(fe389aaa-291c-4fa0-a06f-e4820906cbf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:08.531655 kubelet[3316]: E1124 00:10:08.531423 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:10:08.532197 containerd[1972]: 
time="2025-11-24T00:10:08.532062022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:10:08.608754 containerd[1972]: time="2025-11-24T00:10:08.608698803Z" level=info msg="StartContainer for \"51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2\" returns successfully" Nov 24 00:10:08.837830 containerd[1972]: time="2025-11-24T00:10:08.837204216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:08.840709 containerd[1972]: time="2025-11-24T00:10:08.840604768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:10:08.841066 containerd[1972]: time="2025-11-24T00:10:08.840628858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:10:08.841261 kubelet[3316]: E1124 00:10:08.841173 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:08.841345 kubelet[3316]: E1124 00:10:08.841256 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:10:08.841645 kubelet[3316]: E1124 00:10:08.841545 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9s7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jqnrx_calico-system(18d54b97-5424-4119-892c-ebd148db0571): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:08.842831 kubelet[3316]: E1124 00:10:08.842781 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:10:09.333777 kubelet[3316]: I1124 00:10:09.333737 3316 scope.go:117] "RemoveContainer" containerID="2aa8a020e793dc99ea26770368ef65d4a7d6a29d548cc2dd1db2295733cd6aa2" Nov 24 00:10:09.338582 containerd[1972]: time="2025-11-24T00:10:09.337630911Z" level=info msg="CreateContainer within sandbox \"178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 24 00:10:09.365230 containerd[1972]: time="2025-11-24T00:10:09.365171697Z" level=info msg="Container 43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:10:09.366191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701231492.mount: Deactivated successfully. 
Nov 24 00:10:09.388071 containerd[1972]: time="2025-11-24T00:10:09.387338635Z" level=info msg="CreateContainer within sandbox \"178c879106c4208d062a1ebd8fa7da1feb981dc2609a257ebeb7c7ea3c568db4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7\"" Nov 24 00:10:09.389288 containerd[1972]: time="2025-11-24T00:10:09.389204500Z" level=info msg="StartContainer for \"43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7\"" Nov 24 00:10:09.392134 containerd[1972]: time="2025-11-24T00:10:09.392077936Z" level=info msg="connecting to shim 43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7" address="unix:///run/containerd/s/588e2715898a6ef941babed710e3ac511110f270ded9ddf3dc2ddef1fea7123b" protocol=ttrpc version=3 Nov 24 00:10:09.425908 systemd[1]: Started cri-containerd-43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7.scope - libcontainer container 43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7. Nov 24 00:10:09.539974 containerd[1972]: time="2025-11-24T00:10:09.539911249Z" level=info msg="StartContainer for \"43086ea1f4ccd9ed5d7ccf1a085457596d2a8809d3c20a807527cb96642b7eb7\" returns successfully" Nov 24 00:10:12.429315 systemd[1]: cri-containerd-ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc.scope: Deactivated successfully. Nov 24 00:10:12.430308 systemd[1]: cri-containerd-ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc.scope: Consumed 2.853s CPU time, 38.5M memory peak, 36.1M read from disk. Nov 24 00:10:12.434251 containerd[1972]: time="2025-11-24T00:10:12.434091459Z" level=info msg="received container exit event container_id:\"ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc\" id:\"ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc\" pid:3125 exit_status:1 exited_at:{seconds:1763943012 nanos:433326670}" Nov 24 00:10:12.469832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc-rootfs.mount: Deactivated successfully. Nov 24 00:10:13.238250 containerd[1972]: time="2025-11-24T00:10:13.238158408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:10:13.354818 kubelet[3316]: I1124 00:10:13.354783 3316 scope.go:117] "RemoveContainer" containerID="ab66731f06deea66838e04895f5c538bbc11459e33cbeb46624ccaf44bdd67cc" Nov 24 00:10:13.358584 containerd[1972]: time="2025-11-24T00:10:13.357877013Z" level=info msg="CreateContainer within sandbox \"8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 24 00:10:13.383456 containerd[1972]: time="2025-11-24T00:10:13.383383323Z" level=info msg="Container 8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:10:13.393557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389305396.mount: Deactivated successfully. 
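The exit event logged at 00:10:12 above carries the container's exit time as raw epoch seconds and nanoseconds (exited_at:{seconds:1763943012 nanos:433326670}). A quick conversion shows the embedded value and the journal timestamp agree:

    # Converts the exited_at epoch value from the containerd exit event above.
    from datetime import datetime, timezone

    secs, nanos = 1763943012, 433326670
    exited_at = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=nanos // 1000)
    print(exited_at.isoformat())  # 2025-11-24T00:10:12.433326+00:00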
Nov 24 00:10:13.401340 containerd[1972]: time="2025-11-24T00:10:13.401283630Z" level=info msg="CreateContainer within sandbox \"8c8861d14e03edfb337691bd70a0b1a7cde359180bd4964b0f5a02050c535e69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c\"" Nov 24 00:10:13.402601 containerd[1972]: time="2025-11-24T00:10:13.401984889Z" level=info msg="StartContainer for \"8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c\"" Nov 24 00:10:13.403468 containerd[1972]: time="2025-11-24T00:10:13.403417331Z" level=info msg="connecting to shim 8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c" address="unix:///run/containerd/s/51ef7e71328705b53e218e9b8b35520d189742055afab3632bc817a8033fd943" protocol=ttrpc version=3 Nov 24 00:10:13.433502 systemd[1]: Started cri-containerd-8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c.scope - libcontainer container 8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c. Nov 24 00:10:13.502935 containerd[1972]: time="2025-11-24T00:10:13.502869383Z" level=info msg="StartContainer for \"8b353fa094019feb1305868de7c1dbd30494fe9b639c0666b70151186cad4a1c\" returns successfully" Nov 24 00:10:13.506471 containerd[1972]: time="2025-11-24T00:10:13.506392215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:13.508885 containerd[1972]: time="2025-11-24T00:10:13.508805720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:10:13.509147 containerd[1972]: time="2025-11-24T00:10:13.508923599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:10:13.509203 kubelet[3316]: E1124 00:10:13.509155 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:13.509417 kubelet[3316]: E1124 00:10:13.509274 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:10:13.509532 kubelet[3316]: E1124 00:10:13.509474 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-857d84d84d-ncvx2_calico-system(293f9213-9ce6-465e-8d91-13e61a8f35a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:13.510753 kubelet[3316]: E1124 00:10:13.510685 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0" Nov 24 00:10:14.242207 containerd[1972]: time="2025-11-24T00:10:14.242149310Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:10:14.561585 containerd[1972]: time="2025-11-24T00:10:14.561506510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:14.564578 containerd[1972]: time="2025-11-24T00:10:14.564473366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:10:14.564905 containerd[1972]: time="2025-11-24T00:10:14.564780245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:10:14.565258 kubelet[3316]: E1124 00:10:14.565197 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:14.566122 kubelet[3316]: E1124 00:10:14.565753 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:10:14.566122 kubelet[3316]: E1124 00:10:14.565984 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:14.568980 containerd[1972]: time="2025-11-24T00:10:14.568788453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:10:14.822858 containerd[1972]: time="2025-11-24T00:10:14.822288933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:10:14.824751 containerd[1972]: time="2025-11-24T00:10:14.824582912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:10:14.824751 containerd[1972]: time="2025-11-24T00:10:14.824688820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:10:14.825161 kubelet[3316]: E1124 00:10:14.825104 3316 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:14.825248 kubelet[3316]: E1124 00:10:14.825191 3316 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:10:14.825515 kubelet[3316]: E1124 00:10:14.825439 3316 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvb8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-44qlh_calico-system(a7f2741e-c2a8-4e97-9679-431279b978f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:10:14.826721 kubelet[3316]: E1124 00:10:14.826667 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-44qlh" podUID="a7f2741e-c2a8-4e97-9679-431279b978f1" Nov 24 00:10:14.998855 kubelet[3316]: E1124 00:10:14.975808 3316 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-87?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 00:10:15.238008 kubelet[3316]: E1124 00:10:15.237883 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jzjm6" podUID="63a82b4c-a5db-46d5-9bde-8b4be9966835" Nov 24 00:10:16.620860 update_engine[1961]: I20251124 00:10:16.620783 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:10:16.621277 update_engine[1961]: I20251124 00:10:16.620888 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:10:16.621322 update_engine[1961]: I20251124 00:10:16.621286 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:10:16.622881 update_engine[1961]: E20251124 00:10:16.622805 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:10:16.623061 update_engine[1961]: I20251124 00:10:16.622956 1961 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 24 00:10:16.623061 update_engine[1961]: I20251124 00:10:16.622982 1961 omaha_request_action.cc:617] Omaha request response: Nov 24 00:10:16.623387 update_engine[1961]: E20251124 00:10:16.623090 1961 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.630874 1961 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.630932 1961 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.630941 1961 update_attempter.cc:306] Processing Done. Nov 24 00:10:16.631763 update_engine[1961]: E20251124 00:10:16.630971 1961 update_attempter.cc:619] Update failed. Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.630983 1961 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.630991 1961 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.630999 1961 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.631115 1961 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.631163 1961 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.631171 1961 omaha_request_action.cc:272] Request: Nov 24 00:10:16.631763 update_engine[1961]: [Omaha request XML body not captured in this extract] Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.631179 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.631215 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:10:16.631763 update_engine[1961]: I20251124 00:10:16.631698 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:10:16.633609 locksmithd[2016]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 24 00:10:16.634548 update_engine[1961]: E20251124 00:10:16.634240 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634341 1961 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634351 1961 omaha_request_action.cc:617] Omaha request response: Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634362 1961 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634370 1961 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634379 1961 update_attempter.cc:306] Processing Done. Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634388 1961 update_attempter.cc:310] Error event sent. 
Nov 24 00:10:16.634548 update_engine[1961]: I20251124 00:10:16.634402 1961 update_check_scheduler.cc:74] Next update check in 41m14s Nov 24 00:10:16.634888 locksmithd[2016]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 24 00:10:19.237898 kubelet[3316]: E1124 00:10:19.237731 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cc86c6ddc-jtm28" podUID="fe389aaa-291c-4fa0-a06f-e4820906cbf6" Nov 24 00:10:19.238511 kubelet[3316]: E1124 00:10:19.238341 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dfdd4b85d-wmqzw" podUID="2e5b90ac-d808-4aaf-9a8a-1acb3e1260f1" Nov 24 00:10:20.356074 systemd[1]: cri-containerd-51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2.scope: Deactivated successfully. Nov 24 00:10:20.356382 systemd[1]: cri-containerd-51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2.scope: Consumed 398ms CPU time, 65.2M memory peak, 30.1M read from disk. Nov 24 00:10:20.357846 containerd[1972]: time="2025-11-24T00:10:20.357772551Z" level=info msg="received container exit event container_id:\"51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2\" id:\"51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2\" pid:6027 exit_status:1 exited_at:{seconds:1763943020 nanos:357241293}" Nov 24 00:10:20.389523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2-rootfs.mount: Deactivated successfully. 
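The update_engine failures above are not network flakiness: the Omaha request is being posted "to disabled", i.e. the configured update server is the literal string "disabled" (the usual Flatcar convention for switching off Omaha polling via SERVER=disabled in /etc/flatcar/update.conf, assumed here rather than shown in the log), so libcurl tries to resolve "disabled" as a hostname and fails. A sketch reproducing that failure mode:

    # Sketch: resolve the configured Omaha server the way libcurl would.
    # SERVER=disabled is an assumed update.conf setting, not visible in this log.
    import socket

    def probe(server: str) -> str:
        host = server.removeprefix("https://").removeprefix("http://").split("/")[0]
        try:
            socket.getaddrinfo(host, 443)
            return f"{host}: resolves"
        except socket.gaierror as exc:
            return f"Could not resolve host: {host} ({exc})"

    print(probe("disabled"))  # mirrors "Could not resolve host: disabled" above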
Nov 24 00:10:21.238303 kubelet[3316]: E1124 00:10:21.238232 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jqnrx" podUID="18d54b97-5424-4119-892c-ebd148db0571" Nov 24 00:10:21.386371 kubelet[3316]: I1124 00:10:21.386339 3316 scope.go:117] "RemoveContainer" containerID="7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237" Nov 24 00:10:21.387328 kubelet[3316]: I1124 00:10:21.386759 3316 scope.go:117] "RemoveContainer" containerID="51991ca8a27ce9a373154541bd617698a7bf32cf6928e3bdfbb69f3b74ea58b2" Nov 24 00:10:21.387328 kubelet[3316]: E1124 00:10:21.386976 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-pjjcb_tigera-operator(eb9e51cc-0859-4462-8e20-303778b4efc4)\"" pod="tigera-operator/tigera-operator-7dcd859c48-pjjcb" podUID="eb9e51cc-0859-4462-8e20-303778b4efc4" Nov 24 00:10:21.462877 containerd[1972]: time="2025-11-24T00:10:21.462815479Z" level=info msg="RemoveContainer for \"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\"" Nov 24 00:10:21.514709 containerd[1972]: time="2025-11-24T00:10:21.514647323Z" level=info msg="RemoveContainer for \"7d8f7c05d28c844d44d0117c65af50884ef08b6659b93bda4695570777537237\" returns successfully" Nov 24 00:10:24.999695 kubelet[3316]: E1124 00:10:24.999599 3316 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-87?timeout=10s\": context deadline exceeded" Nov 24 00:10:28.237995 kubelet[3316]: E1124 00:10:28.237833 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-857d84d84d-ncvx2" podUID="293f9213-9ce6-465e-8d91-13e61a8f35a0"
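The "back-off 10s restarting failed container" and ImagePullBackOff entries in this stretch of the log follow the kubelet's doubling backoff: each failed restart or pull roughly doubles the wait, from an initial 10 seconds up to a cap of about five minutes under upstream defaults. A simplified sketch of that schedule (jitter and reset behaviour omitted):

    # Sketch of the doubling delay behind "back-off 10s restarting failed container";
    # 10s initial and a 300s cap follow upstream kubelet defaults, details simplified.
    def backoff_delays(initial: float = 10.0, cap: float = 300.0):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    gen = backoff_delays()
    print([next(gen) for _ in range(6)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]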