Nov 24 00:32:48.891575 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:54:38 -00 2025 Nov 24 00:32:48.891614 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:32:48.891633 kernel: BIOS-provided physical RAM map: Nov 24 00:32:48.891645 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 24 00:32:48.891656 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 24 00:32:48.891667 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 24 00:32:48.891682 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 24 00:32:48.891694 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 24 00:32:48.891706 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 24 00:32:48.891718 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 24 00:32:48.891730 kernel: NX (Execute Disable) protection: active Nov 24 00:32:48.891745 kernel: APIC: Static calls initialized Nov 24 00:32:48.891757 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Nov 24 00:32:48.891770 kernel: extended physical RAM map: Nov 24 00:32:48.891785 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 24 00:32:48.891799 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Nov 24 00:32:48.891815 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Nov 24 00:32:48.891829 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Nov 24 00:32:48.892632 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 24 00:32:48.892651 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 24 00:32:48.892666 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 24 00:32:48.892680 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 24 00:32:48.892693 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 24 00:32:48.892705 kernel: efi: EFI v2.7 by EDK II Nov 24 00:32:48.892718 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Nov 24 00:32:48.892730 kernel: secureboot: Secure boot disabled Nov 24 00:32:48.892744 kernel: SMBIOS 2.7 present. 
Nov 24 00:32:48.892762 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 24 00:32:48.892775 kernel: DMI: Memory slots populated: 1/1 Nov 24 00:32:48.892787 kernel: Hypervisor detected: KVM Nov 24 00:32:48.892800 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 24 00:32:48.892813 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 24 00:32:48.892825 kernel: kvm-clock: using sched offset of 6231946187 cycles Nov 24 00:32:48.892873 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 24 00:32:48.892887 kernel: tsc: Detected 2499.996 MHz processor Nov 24 00:32:48.892901 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 24 00:32:48.892914 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 24 00:32:48.892927 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 24 00:32:48.892944 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 24 00:32:48.892958 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 24 00:32:48.892976 kernel: Using GB pages for direct mapping Nov 24 00:32:48.892989 kernel: ACPI: Early table checksum verification disabled Nov 24 00:32:48.893002 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 24 00:32:48.893015 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 24 00:32:48.893032 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 24 00:32:48.893047 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 24 00:32:48.893061 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 24 00:32:48.893074 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 24 00:32:48.893088 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 24 00:32:48.893103 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 24 00:32:48.893117 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 24 00:32:48.893131 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 24 00:32:48.893149 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 24 00:32:48.893164 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 24 00:32:48.893178 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 24 00:32:48.893192 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 24 00:32:48.893207 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 24 00:32:48.893221 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 24 00:32:48.893235 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 24 00:32:48.893250 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 24 00:32:48.893264 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Nov 24 00:32:48.893281 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 24 00:32:48.893294 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 24 00:32:48.893307 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 24 00:32:48.893320 kernel: ACPI: Reserving SSDT table memory at [mem 
0x78952000-0x7895207e] Nov 24 00:32:48.893333 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Nov 24 00:32:48.893347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 24 00:32:48.893361 kernel: NUMA: Initialized distance table, cnt=1 Nov 24 00:32:48.893373 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Nov 24 00:32:48.893387 kernel: Zone ranges: Nov 24 00:32:48.893406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 24 00:32:48.893422 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 24 00:32:48.893436 kernel: Normal empty Nov 24 00:32:48.893449 kernel: Device empty Nov 24 00:32:48.893462 kernel: Movable zone start for each node Nov 24 00:32:48.893475 kernel: Early memory node ranges Nov 24 00:32:48.893487 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 24 00:32:48.893500 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 24 00:32:48.893513 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 24 00:32:48.893529 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 24 00:32:48.893543 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 24 00:32:48.893556 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 24 00:32:48.893569 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 24 00:32:48.893582 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 24 00:32:48.893594 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 24 00:32:48.893607 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 24 00:32:48.893621 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 24 00:32:48.893634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 24 00:32:48.893647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 24 00:32:48.894888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 24 00:32:48.894910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 24 00:32:48.894925 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 24 00:32:48.894941 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 24 00:32:48.894956 kernel: TSC deadline timer available Nov 24 00:32:48.894970 kernel: CPU topo: Max. logical packages: 1 Nov 24 00:32:48.894985 kernel: CPU topo: Max. logical dies: 1 Nov 24 00:32:48.895000 kernel: CPU topo: Max. dies per package: 1 Nov 24 00:32:48.895014 kernel: CPU topo: Max. threads per core: 2 Nov 24 00:32:48.895033 kernel: CPU topo: Num. cores per package: 1 Nov 24 00:32:48.895048 kernel: CPU topo: Num. 
threads per package: 2 Nov 24 00:32:48.895062 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 24 00:32:48.895077 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 24 00:32:48.895092 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 24 00:32:48.895107 kernel: Booting paravirtualized kernel on KVM Nov 24 00:32:48.895122 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 24 00:32:48.895137 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 24 00:32:48.895153 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 24 00:32:48.895171 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 24 00:32:48.895185 kernel: pcpu-alloc: [0] 0 1 Nov 24 00:32:48.895200 kernel: kvm-guest: PV spinlocks enabled Nov 24 00:32:48.895215 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 24 00:32:48.895232 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:32:48.895248 kernel: random: crng init done Nov 24 00:32:48.895262 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 24 00:32:48.895277 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 24 00:32:48.895295 kernel: Fallback order for Node 0: 0 Nov 24 00:32:48.895310 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Nov 24 00:32:48.895326 kernel: Policy zone: DMA32 Nov 24 00:32:48.895352 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 24 00:32:48.895370 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 24 00:32:48.895386 kernel: Kernel/User page tables isolation: enabled Nov 24 00:32:48.895402 kernel: ftrace: allocating 40103 entries in 157 pages Nov 24 00:32:48.895417 kernel: ftrace: allocated 157 pages with 5 groups Nov 24 00:32:48.895433 kernel: Dynamic Preempt: voluntary Nov 24 00:32:48.895448 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 24 00:32:48.895465 kernel: rcu: RCU event tracing is enabled. Nov 24 00:32:48.895481 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 24 00:32:48.895500 kernel: Trampoline variant of Tasks RCU enabled. Nov 24 00:32:48.895516 kernel: Rude variant of Tasks RCU enabled. Nov 24 00:32:48.895532 kernel: Tracing variant of Tasks RCU enabled. Nov 24 00:32:48.895547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 24 00:32:48.895563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 24 00:32:48.895582 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:32:48.895598 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 24 00:32:48.895614 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 24 00:32:48.895630 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 24 00:32:48.895645 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 24 00:32:48.895661 kernel: Console: colour dummy device 80x25 Nov 24 00:32:48.895677 kernel: printk: legacy console [tty0] enabled Nov 24 00:32:48.895692 kernel: printk: legacy console [ttyS0] enabled Nov 24 00:32:48.895708 kernel: ACPI: Core revision 20240827 Nov 24 00:32:48.895727 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 24 00:32:48.895743 kernel: APIC: Switch to symmetric I/O mode setup Nov 24 00:32:48.895759 kernel: x2apic enabled Nov 24 00:32:48.895775 kernel: APIC: Switched APIC routing to: physical x2apic Nov 24 00:32:48.895790 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Nov 24 00:32:48.895806 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Nov 24 00:32:48.895822 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 24 00:32:48.896873 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 24 00:32:48.896900 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 24 00:32:48.896921 kernel: Spectre V2 : Mitigation: Retpolines Nov 24 00:32:48.896937 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 24 00:32:48.896952 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Nov 24 00:32:48.896969 kernel: RETBleed: Vulnerable Nov 24 00:32:48.896985 kernel: Speculative Store Bypass: Vulnerable Nov 24 00:32:48.897000 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 24 00:32:48.897016 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 24 00:32:48.897031 kernel: GDS: Unknown: Dependent on hypervisor status Nov 24 00:32:48.897047 kernel: active return thunk: its_return_thunk Nov 24 00:32:48.897062 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 24 00:32:48.897078 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 24 00:32:48.897097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 24 00:32:48.897113 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 24 00:32:48.897128 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 24 00:32:48.897144 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 24 00:32:48.897160 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 24 00:32:48.897176 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 24 00:32:48.897191 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 24 00:32:48.897207 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 24 00:32:48.897223 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 24 00:32:48.897238 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 24 00:32:48.897253 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 24 00:32:48.897273 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 24 00:32:48.897289 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 24 00:32:48.897305 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 24 00:32:48.897321 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 24 
00:32:48.897337 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Nov 24 00:32:48.897354 kernel: Freeing SMP alternatives memory: 32K Nov 24 00:32:48.897369 kernel: pid_max: default: 32768 minimum: 301 Nov 24 00:32:48.897385 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 24 00:32:48.897400 kernel: landlock: Up and running. Nov 24 00:32:48.897416 kernel: SELinux: Initializing. Nov 24 00:32:48.897430 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 24 00:32:48.897446 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 24 00:32:48.897465 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 24 00:32:48.897480 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 24 00:32:48.897496 kernel: signal: max sigframe size: 3632 Nov 24 00:32:48.897512 kernel: rcu: Hierarchical SRCU implementation. Nov 24 00:32:48.897529 kernel: rcu: Max phase no-delay instances is 400. Nov 24 00:32:48.897545 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 24 00:32:48.897561 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 24 00:32:48.897576 kernel: smp: Bringing up secondary CPUs ... Nov 24 00:32:48.897592 kernel: smpboot: x86: Booting SMP configuration: Nov 24 00:32:48.897610 kernel: .... node #0, CPUs: #1 Nov 24 00:32:48.897627 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 24 00:32:48.897644 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 24 00:32:48.897660 kernel: smp: Brought up 1 node, 2 CPUs Nov 24 00:32:48.897675 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Nov 24 00:32:48.897692 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved) Nov 24 00:32:48.897708 kernel: devtmpfs: initialized Nov 24 00:32:48.897724 kernel: x86/mm: Memory block size: 128MB Nov 24 00:32:48.897743 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 24 00:32:48.897758 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 24 00:32:48.897775 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 24 00:32:48.897790 kernel: pinctrl core: initialized pinctrl subsystem Nov 24 00:32:48.897806 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 24 00:32:48.897821 kernel: audit: initializing netlink subsys (disabled) Nov 24 00:32:48.902258 kernel: audit: type=2000 audit(1763944367.272:1): state=initialized audit_enabled=0 res=1 Nov 24 00:32:48.902306 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 24 00:32:48.902324 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 24 00:32:48.902349 kernel: cpuidle: using governor menu Nov 24 00:32:48.902365 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 24 00:32:48.902381 kernel: dca service started, version 1.12.1 Nov 24 00:32:48.902397 kernel: PCI: Using configuration type 1 for base access Nov 24 00:32:48.902414 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Nov 24 00:32:48.902430 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 24 00:32:48.902447 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 24 00:32:48.902463 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 24 00:32:48.902479 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 24 00:32:48.902498 kernel: ACPI: Added _OSI(Module Device) Nov 24 00:32:48.902514 kernel: ACPI: Added _OSI(Processor Device) Nov 24 00:32:48.902530 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 24 00:32:48.902545 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 24 00:32:48.902561 kernel: ACPI: Interpreter enabled Nov 24 00:32:48.902576 kernel: ACPI: PM: (supports S0 S5) Nov 24 00:32:48.902593 kernel: ACPI: Using IOAPIC for interrupt routing Nov 24 00:32:48.902609 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 24 00:32:48.902625 kernel: PCI: Using E820 reservations for host bridge windows Nov 24 00:32:48.902644 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 24 00:32:48.902658 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 24 00:32:48.902983 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 24 00:32:48.903142 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 24 00:32:48.903282 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 24 00:32:48.903303 kernel: acpiphp: Slot [3] registered Nov 24 00:32:48.903320 kernel: acpiphp: Slot [4] registered Nov 24 00:32:48.903340 kernel: acpiphp: Slot [5] registered Nov 24 00:32:48.903354 kernel: acpiphp: Slot [6] registered Nov 24 00:32:48.903368 kernel: acpiphp: Slot [7] registered Nov 24 00:32:48.903382 kernel: acpiphp: Slot [8] registered Nov 24 00:32:48.903396 kernel: acpiphp: Slot [9] registered Nov 24 00:32:48.903411 kernel: acpiphp: Slot [10] registered Nov 24 00:32:48.903425 kernel: acpiphp: Slot [11] registered Nov 24 00:32:48.903439 kernel: acpiphp: Slot [12] registered Nov 24 00:32:48.903453 kernel: acpiphp: Slot [13] registered Nov 24 00:32:48.903467 kernel: acpiphp: Slot [14] registered Nov 24 00:32:48.903485 kernel: acpiphp: Slot [15] registered Nov 24 00:32:48.903499 kernel: acpiphp: Slot [16] registered Nov 24 00:32:48.903513 kernel: acpiphp: Slot [17] registered Nov 24 00:32:48.903528 kernel: acpiphp: Slot [18] registered Nov 24 00:32:48.903542 kernel: acpiphp: Slot [19] registered Nov 24 00:32:48.903556 kernel: acpiphp: Slot [20] registered Nov 24 00:32:48.903571 kernel: acpiphp: Slot [21] registered Nov 24 00:32:48.903585 kernel: acpiphp: Slot [22] registered Nov 24 00:32:48.903600 kernel: acpiphp: Slot [23] registered Nov 24 00:32:48.903617 kernel: acpiphp: Slot [24] registered Nov 24 00:32:48.903632 kernel: acpiphp: Slot [25] registered Nov 24 00:32:48.903647 kernel: acpiphp: Slot [26] registered Nov 24 00:32:48.903662 kernel: acpiphp: Slot [27] registered Nov 24 00:32:48.903676 kernel: acpiphp: Slot [28] registered Nov 24 00:32:48.903691 kernel: acpiphp: Slot [29] registered Nov 24 00:32:48.903705 kernel: acpiphp: Slot [30] registered Nov 24 00:32:48.903720 kernel: acpiphp: Slot [31] registered Nov 24 00:32:48.903734 kernel: PCI host bridge to bus 0000:00 Nov 24 00:32:48.905949 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 24 
00:32:48.906121 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 24 00:32:48.906248 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 24 00:32:48.906372 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 24 00:32:48.906494 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 24 00:32:48.906616 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 24 00:32:48.906779 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 24 00:32:48.906962 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 24 00:32:48.907109 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Nov 24 00:32:48.907245 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 24 00:32:48.907379 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 24 00:32:48.907512 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 24 00:32:48.907648 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 24 00:32:48.907786 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 24 00:32:48.907978 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 24 00:32:48.908114 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 24 00:32:48.908260 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Nov 24 00:32:48.908404 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Nov 24 00:32:48.908539 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 24 00:32:48.908672 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 24 00:32:48.908821 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Nov 24 00:32:48.908971 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Nov 24 00:32:48.909116 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Nov 24 00:32:48.910077 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Nov 24 00:32:48.910102 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 24 00:32:48.910118 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 24 00:32:48.910134 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 24 00:32:48.910153 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 24 00:32:48.910169 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 24 00:32:48.910184 kernel: iommu: Default domain type: Translated Nov 24 00:32:48.910200 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 24 00:32:48.910215 kernel: efivars: Registered efivars operations Nov 24 00:32:48.910230 kernel: PCI: Using ACPI for IRQ routing Nov 24 00:32:48.910245 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 24 00:32:48.910259 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Nov 24 00:32:48.910274 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 24 00:32:48.910292 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 24 00:32:48.910440 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 24 00:32:48.910576 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 24 00:32:48.910715 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 24 00:32:48.910734 kernel: vgaarb: loaded Nov 24 00:32:48.910750 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 24 00:32:48.910767 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Nov 24 00:32:48.910783 kernel: clocksource: Switched to clocksource kvm-clock Nov 24 00:32:48.910799 kernel: VFS: Disk quotas dquot_6.6.0 Nov 24 00:32:48.910819 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 24 00:32:48.910835 kernel: pnp: PnP ACPI init Nov 24 00:32:48.911890 kernel: pnp: PnP ACPI: found 5 devices Nov 24 00:32:48.911907 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 24 00:32:48.911923 kernel: NET: Registered PF_INET protocol family Nov 24 00:32:48.911937 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 24 00:32:48.911952 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 24 00:32:48.911966 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 24 00:32:48.911981 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 24 00:32:48.912000 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 24 00:32:48.912015 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 24 00:32:48.912031 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 24 00:32:48.912046 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 24 00:32:48.912061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 24 00:32:48.912076 kernel: NET: Registered PF_XDP protocol family Nov 24 00:32:48.912227 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 24 00:32:48.912360 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 24 00:32:48.912483 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 24 00:32:48.912597 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 24 00:32:48.912706 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 24 00:32:48.912834 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 24 00:32:48.912871 kernel: PCI: CLS 0 bytes, default 64 Nov 24 00:32:48.912886 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 24 00:32:48.912901 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Nov 24 00:32:48.912916 kernel: clocksource: Switched to clocksource tsc Nov 24 00:32:48.912935 kernel: Initialise system trusted keyrings Nov 24 00:32:48.912950 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 24 00:32:48.912964 kernel: Key type asymmetric registered Nov 24 00:32:48.912979 kernel: Asymmetric key parser 'x509' registered Nov 24 00:32:48.912993 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 24 00:32:48.913008 kernel: io scheduler mq-deadline registered Nov 24 00:32:48.913022 kernel: io scheduler kyber registered Nov 24 00:32:48.913036 kernel: io scheduler bfq registered Nov 24 00:32:48.913051 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 24 00:32:48.913068 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 24 00:32:48.913082 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 24 00:32:48.913097 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 24 00:32:48.913111 kernel: i8042: Warning: Keylock active Nov 24 
00:32:48.913125 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 24 00:32:48.913139 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 24 00:32:48.913270 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 24 00:32:48.913386 kernel: rtc_cmos 00:00: registered as rtc0 Nov 24 00:32:48.913501 kernel: rtc_cmos 00:00: setting system clock to 2025-11-24T00:32:48 UTC (1763944368) Nov 24 00:32:48.913612 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 24 00:32:48.913650 kernel: intel_pstate: CPU model not supported Nov 24 00:32:48.913668 kernel: efifb: probing for efifb Nov 24 00:32:48.913683 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Nov 24 00:32:48.913699 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 24 00:32:48.913714 kernel: efifb: scrolling: redraw Nov 24 00:32:48.913729 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 24 00:32:48.913745 kernel: Console: switching to colour frame buffer device 100x37 Nov 24 00:32:48.913762 kernel: fb0: EFI VGA frame buffer device Nov 24 00:32:48.913777 kernel: pstore: Using crash dump compression: deflate Nov 24 00:32:48.913793 kernel: pstore: Registered efi_pstore as persistent store backend Nov 24 00:32:48.913808 kernel: NET: Registered PF_INET6 protocol family Nov 24 00:32:48.913823 kernel: Segment Routing with IPv6 Nov 24 00:32:48.913838 kernel: In-situ OAM (IOAM) with IPv6 Nov 24 00:32:48.916747 kernel: NET: Registered PF_PACKET protocol family Nov 24 00:32:48.916763 kernel: Key type dns_resolver registered Nov 24 00:32:48.916778 kernel: IPI shorthand broadcast: enabled Nov 24 00:32:48.916798 kernel: sched_clock: Marking stable (2712002154, 147364754)->(2942351345, -82984437) Nov 24 00:32:48.916813 kernel: registered taskstats version 1 Nov 24 00:32:48.916836 kernel: Loading compiled-in X.509 certificates Nov 24 00:32:48.916871 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 5d380f93d180914be04be8068ab300f495c35900' Nov 24 00:32:48.916885 kernel: Demotion targets for Node 0: null Nov 24 00:32:48.916898 kernel: Key type .fscrypt registered Nov 24 00:32:48.916913 kernel: Key type fscrypt-provisioning registered Nov 24 00:32:48.916927 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 24 00:32:48.916943 kernel: ima: Allocated hash algorithm: sha1 Nov 24 00:32:48.916960 kernel: ima: No architecture policies found Nov 24 00:32:48.916975 kernel: clk: Disabling unused clocks Nov 24 00:32:48.916991 kernel: Warning: unable to open an initial console. Nov 24 00:32:48.917007 kernel: Freeing unused kernel image (initmem) memory: 46188K Nov 24 00:32:48.917023 kernel: Write protecting the kernel read-only data: 40960k Nov 24 00:32:48.917041 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 24 00:32:48.917060 kernel: Run /init as init process Nov 24 00:32:48.917077 kernel: with arguments: Nov 24 00:32:48.917093 kernel: /init Nov 24 00:32:48.917109 kernel: with environment: Nov 24 00:32:48.917125 kernel: HOME=/ Nov 24 00:32:48.917142 kernel: TERM=linux Nov 24 00:32:48.917161 systemd[1]: Successfully made /usr/ read-only. 
Nov 24 00:32:48.917182 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:32:48.917204 systemd[1]: Detected virtualization amazon. Nov 24 00:32:48.917221 systemd[1]: Detected architecture x86-64. Nov 24 00:32:48.917237 systemd[1]: Running in initrd. Nov 24 00:32:48.917254 systemd[1]: No hostname configured, using default hostname. Nov 24 00:32:48.917272 systemd[1]: Hostname set to . Nov 24 00:32:48.917292 systemd[1]: Initializing machine ID from VM UUID. Nov 24 00:32:48.917310 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:32:48.917327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:32:48.917349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:32:48.917368 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:32:48.917386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:32:48.917404 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:32:48.917424 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:32:48.917440 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:32:48.917459 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 00:32:48.917477 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:32:48.917495 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:32:48.917512 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:32:48.917529 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:32:48.917546 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:32:48.917564 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:32:48.917581 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:32:48.917600 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:32:48.917620 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 00:32:48.917638 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 00:32:48.917652 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:32:48.917669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:32:48.917686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:32:48.917702 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:32:48.917719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 00:32:48.917736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:32:48.917755 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 24 00:32:48.917772 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 00:32:48.917789 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 00:32:48.917806 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:32:48.917822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:32:48.917839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:32:48.917871 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 00:32:48.917892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:32:48.917943 systemd-journald[188]: Collecting audit messages is disabled. Nov 24 00:32:48.917984 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 00:32:48.918001 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:32:48.918020 systemd-journald[188]: Journal started Nov 24 00:32:48.918054 systemd-journald[188]: Runtime Journal (/run/log/journal/ec27e9ea4292c70281785eb094ec148a) is 4.7M, max 38.1M, 33.3M free. Nov 24 00:32:48.916884 systemd-modules-load[189]: Inserted module 'overlay' Nov 24 00:32:48.926877 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:32:48.931631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:32:48.936996 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 00:32:48.943038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:32:48.950252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:32:48.961135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:32:48.973522 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 00:32:48.973561 kernel: Bridge firewalling registered Nov 24 00:32:48.964385 systemd-modules-load[189]: Inserted module 'br_netfilter' Nov 24 00:32:48.976618 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 00:32:48.980955 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:32:48.987058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:32:48.995496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:32:48.996695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:32:49.000184 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:32:49.004420 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 00:32:49.009900 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 24 00:32:49.010281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:32:49.020034 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 24 00:32:49.035583 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1969a6ee0c0ec5507eb68849c160e94c58e52d2291c767873af68a1f52b30801 Nov 24 00:32:49.080629 systemd-resolved[228]: Positive Trust Anchors: Nov 24 00:32:49.081621 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:32:49.081684 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:32:49.089020 systemd-resolved[228]: Defaulting to hostname 'linux'. Nov 24 00:32:49.092434 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:32:49.093177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:32:49.140882 kernel: SCSI subsystem initialized Nov 24 00:32:49.150872 kernel: Loading iSCSI transport class v2.0-870. Nov 24 00:32:49.162887 kernel: iscsi: registered transport (tcp) Nov 24 00:32:49.185895 kernel: iscsi: registered transport (qla4xxx) Nov 24 00:32:49.185973 kernel: QLogic iSCSI HBA Driver Nov 24 00:32:49.209735 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:32:49.230901 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:32:49.234627 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:32:49.285111 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 00:32:49.287346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 00:32:49.340877 kernel: raid6: avx512x4 gen() 16245 MB/s Nov 24 00:32:49.358880 kernel: raid6: avx512x2 gen() 14520 MB/s Nov 24 00:32:49.376875 kernel: raid6: avx512x1 gen() 15421 MB/s Nov 24 00:32:49.394869 kernel: raid6: avx2x4 gen() 16204 MB/s Nov 24 00:32:49.412879 kernel: raid6: avx2x2 gen() 13907 MB/s Nov 24 00:32:49.431144 kernel: raid6: avx2x1 gen() 12244 MB/s Nov 24 00:32:49.431221 kernel: raid6: using algorithm avx512x4 gen() 16245 MB/s Nov 24 00:32:49.450237 kernel: raid6: .... xor() 7038 MB/s, rmw enabled Nov 24 00:32:49.450321 kernel: raid6: using avx512x2 recovery algorithm Nov 24 00:32:49.472890 kernel: xor: automatically using best checksumming function avx Nov 24 00:32:49.650943 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 00:32:49.658413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:32:49.660769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:32:49.688490 systemd-udevd[438]: Using default interface naming scheme 'v255'. 
Nov 24 00:32:49.695268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:32:49.700079 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 00:32:49.735251 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Nov 24 00:32:49.739226 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Nov 24 00:32:49.768163 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:32:49.770324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:32:49.830590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:32:49.835617 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 00:32:49.918863 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 24 00:32:49.922488 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 24 00:32:49.931836 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 24 00:32:49.932143 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 24 00:32:49.940927 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 24 00:32:49.941189 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 24 00:32:49.953156 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 00:32:49.953230 kernel: GPT:9289727 != 33554431 Nov 24 00:32:49.953251 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:e2:43:06:ca:4b Nov 24 00:32:49.953513 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 00:32:49.958159 kernel: GPT:9289727 != 33554431 Nov 24 00:32:49.958224 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 00:32:49.960405 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:32:49.962974 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 00:32:49.970763 (udev-worker)[489]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:32:49.973388 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:32:49.973584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:32:49.975537 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:32:49.979348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:32:49.982678 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:32:50.001591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:32:50.004738 kernel: AES CTR mode by8 optimization enabled Nov 24 00:32:50.003004 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:32:50.007373 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:32:50.027735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:32:50.057073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:32:50.059980 kernel: nvme nvme0: using unchecked data buffer Nov 24 00:32:50.167666 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 24 00:32:50.216198 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 24 00:32:50.217177 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Nov 24 00:32:50.236951 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 24 00:32:50.237507 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 24 00:32:50.249469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 24 00:32:50.250158 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 00:32:50.251461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:32:50.252766 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:32:50.254580 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 00:32:50.256993 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 00:32:50.275414 disk-uuid[678]: Primary Header is updated. Nov 24 00:32:50.275414 disk-uuid[678]: Secondary Entries is updated. Nov 24 00:32:50.275414 disk-uuid[678]: Secondary Header is updated. Nov 24 00:32:50.285378 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:32:50.284669 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:32:51.299091 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 24 00:32:51.299154 disk-uuid[680]: The operation has completed successfully. Nov 24 00:32:51.438774 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 00:32:51.438934 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 00:32:51.478809 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 00:32:51.496430 sh[946]: Success Nov 24 00:32:51.523020 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 00:32:51.523111 kernel: device-mapper: uevent: version 1.0.3 Nov 24 00:32:51.527347 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 00:32:51.537866 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Nov 24 00:32:51.656023 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 00:32:51.659952 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 00:32:51.676750 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 24 00:32:51.695886 kernel: BTRFS: device fsid c993ebd2-0e38-4cfc-8615-2c75294bea72 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (969) Nov 24 00:32:51.698881 kernel: BTRFS info (device dm-0): first mount of filesystem c993ebd2-0e38-4cfc-8615-2c75294bea72 Nov 24 00:32:51.698971 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:32:51.805192 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 24 00:32:51.805271 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 00:32:51.807773 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 00:32:51.832050 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 00:32:51.833219 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:32:51.833763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 00:32:51.834533 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 24 00:32:51.836084 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 00:32:51.866895 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1002) Nov 24 00:32:51.871325 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:32:51.871394 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:32:51.888544 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:32:51.888631 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:32:51.895897 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:32:51.898809 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 00:32:51.901604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 00:32:51.941050 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:32:51.943959 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:32:51.993677 systemd-networkd[1138]: lo: Link UP Nov 24 00:32:51.993690 systemd-networkd[1138]: lo: Gained carrier Nov 24 00:32:51.999253 systemd-networkd[1138]: Enumeration completed Nov 24 00:32:51.999992 systemd-networkd[1138]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:32:52.000000 systemd-networkd[1138]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:32:52.000089 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:32:52.002100 systemd[1]: Reached target network.target - Network. Nov 24 00:32:52.003592 systemd-networkd[1138]: eth0: Link UP Nov 24 00:32:52.003597 systemd-networkd[1138]: eth0: Gained carrier Nov 24 00:32:52.003614 systemd-networkd[1138]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:32:52.012951 systemd-networkd[1138]: eth0: DHCPv4 address 172.31.20.18/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 24 00:32:52.303059 ignition[1089]: Ignition 2.22.0 Nov 24 00:32:52.303078 ignition[1089]: Stage: fetch-offline Nov 24 00:32:52.303321 ignition[1089]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:52.303335 ignition[1089]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:52.303821 ignition[1089]: Ignition finished successfully Nov 24 00:32:52.306538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:32:52.308214 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 24 00:32:52.340238 ignition[1148]: Ignition 2.22.0 Nov 24 00:32:52.340905 ignition[1148]: Stage: fetch Nov 24 00:32:52.341310 ignition[1148]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:52.341323 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:52.341438 ignition[1148]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:52.370790 ignition[1148]: PUT result: OK Nov 24 00:32:52.374703 ignition[1148]: parsed url from cmdline: "" Nov 24 00:32:52.374717 ignition[1148]: no config URL provided Nov 24 00:32:52.374729 ignition[1148]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 00:32:52.374744 ignition[1148]: no config at "/usr/lib/ignition/user.ign" Nov 24 00:32:52.374791 ignition[1148]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:52.376608 ignition[1148]: PUT result: OK Nov 24 00:32:52.376679 ignition[1148]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 24 00:32:52.377660 ignition[1148]: GET result: OK Nov 24 00:32:52.377775 ignition[1148]: parsing config with SHA512: ce0a3f64e7770a649416a343db413938ddc06daefe634051731b615aa79799c2ff55728a9054175794ec4958ae07b1c7500468449943e87010ff6713ee4f2c39 Nov 24 00:32:52.385290 unknown[1148]: fetched base config from "system" Nov 24 00:32:52.385305 unknown[1148]: fetched base config from "system" Nov 24 00:32:52.385813 ignition[1148]: fetch: fetch complete Nov 24 00:32:52.385312 unknown[1148]: fetched user config from "aws" Nov 24 00:32:52.385821 ignition[1148]: fetch: fetch passed Nov 24 00:32:52.385897 ignition[1148]: Ignition finished successfully Nov 24 00:32:52.389378 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 00:32:52.391066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 00:32:52.427299 ignition[1155]: Ignition 2.22.0 Nov 24 00:32:52.427313 ignition[1155]: Stage: kargs Nov 24 00:32:52.427686 ignition[1155]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:52.427698 ignition[1155]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:52.427807 ignition[1155]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:52.428936 ignition[1155]: PUT result: OK Nov 24 00:32:52.432111 ignition[1155]: kargs: kargs passed Nov 24 00:32:52.432188 ignition[1155]: Ignition finished successfully Nov 24 00:32:52.434217 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 00:32:52.436301 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 00:32:52.469687 ignition[1162]: Ignition 2.22.0 Nov 24 00:32:52.469705 ignition[1162]: Stage: disks Nov 24 00:32:52.470108 ignition[1162]: no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:52.470120 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:52.470697 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:52.471773 ignition[1162]: PUT result: OK Nov 24 00:32:52.475757 ignition[1162]: disks: disks passed Nov 24 00:32:52.475856 ignition[1162]: Ignition finished successfully Nov 24 00:32:52.477686 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 00:32:52.478364 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 00:32:52.478750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 00:32:52.479468 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 24 00:32:52.480086 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:32:52.481145 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:32:52.482683 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 00:32:52.525265 systemd-fsck[1171]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 00:32:52.527958 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 00:32:52.530768 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 00:32:52.685894 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d9d0447-100f-4769-adb5-76fdba966eb2 r/w with ordered data mode. Quota mode: none. Nov 24 00:32:52.685970 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 00:32:52.687224 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 00:32:52.689394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:32:52.692287 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 00:32:52.696125 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 24 00:32:52.697446 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 00:32:52.697488 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:32:52.709886 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 00:32:52.714793 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 00:32:52.724870 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1190) Nov 24 00:32:52.728984 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:32:52.729061 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:32:52.736922 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:32:52.737002 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:32:52.739711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:32:53.075902 initrd-setup-root[1214]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 00:32:53.103878 initrd-setup-root[1221]: cut: /sysroot/etc/group: No such file or directory Nov 24 00:32:53.122886 initrd-setup-root[1228]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 00:32:53.128694 initrd-setup-root[1235]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 00:32:53.509793 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 00:32:53.511973 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 00:32:53.515006 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 00:32:53.536111 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 00:32:53.538942 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:32:53.566837 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
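The "cut: /sysroot/etc/passwd: No such file or directory" lines (and their group/shadow/gshadow counterparts) come from the root-filesystem setup step reading the account databases in the freshly mounted sysroot, which do not exist yet on a first boot. The actual setup script is not shown in the log; the sketch below only approximates the `cut -d: -f1`-style extraction that would produce those errors when the files are missing:

```python
import os

SYSROOT = "/sysroot"
ACCOUNT_FILES = ["etc/passwd", "etc/group", "etc/shadow", "etc/gshadow"]

def existing_names(sysroot=SYSROOT):
    """Collect the first ':'-separated field (the entry name) from each account file."""
    names = {}
    for rel in ACCOUNT_FILES:
        path = os.path.join(sysroot, rel)
        try:
            with open(path, "r", encoding="utf-8") as fh:
                names[rel] = [line.split(":", 1)[0] for line in fh if line.strip()]
        except FileNotFoundError:
            # Matches the log: "cut: /sysroot/etc/passwd: No such file or directory"
            print(f"cut: {path}: No such file or directory")
            names[rel] = []
    return names

if __name__ == "__main__":
    existing_names()
```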
Nov 24 00:32:53.577410 ignition[1302]: INFO : Ignition 2.22.0 Nov 24 00:32:53.577410 ignition[1302]: INFO : Stage: mount Nov 24 00:32:53.578897 ignition[1302]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:53.578897 ignition[1302]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:53.578897 ignition[1302]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:53.578897 ignition[1302]: INFO : PUT result: OK Nov 24 00:32:53.581972 ignition[1302]: INFO : mount: mount passed Nov 24 00:32:53.582935 ignition[1302]: INFO : Ignition finished successfully Nov 24 00:32:53.583718 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:32:53.585761 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:32:53.612137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:32:53.667867 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1314) Nov 24 00:32:53.670983 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8f3e7759-f869-465c-a676-2cd550a2d4e4 Nov 24 00:32:53.671058 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:32:53.685937 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:32:53.686019 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:32:53.689110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:32:53.787133 ignition[1331]: INFO : Ignition 2.22.0 Nov 24 00:32:53.787133 ignition[1331]: INFO : Stage: files Nov 24 00:32:53.788952 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:53.788952 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:53.788952 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:53.788952 ignition[1331]: INFO : PUT result: OK Nov 24 00:32:53.793832 ignition[1331]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:32:53.794926 ignition[1331]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:32:53.794926 ignition[1331]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:32:53.806244 ignition[1331]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:32:53.807493 ignition[1331]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:32:53.807493 ignition[1331]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:32:53.806883 unknown[1331]: wrote ssh authorized keys file for user: core Nov 24 00:32:53.821636 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 24 00:32:53.823684 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 24 00:32:53.887382 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:32:53.998187 systemd-networkd[1138]: eth0: Gained IPv6LL Nov 24 00:32:54.049808 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 24 00:32:54.049808 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Nov 24 00:32:54.053521 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:32:54.053521 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:32:54.053521 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:32:54.053521 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:32:54.053521 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:32:54.053521 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:32:54.059428 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:32:54.060788 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:32:54.060788 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:32:54.060788 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:32:54.063810 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:32:54.063810 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:32:54.063810 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 24 00:32:54.497893 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:32:55.061480 ignition[1331]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:32:55.061480 ignition[1331]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:32:55.063753 ignition[1331]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:32:55.067812 ignition[1331]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:32:55.067812 ignition[1331]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:32:55.067812 ignition[1331]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:32:55.071595 ignition[1331]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:32:55.071595 ignition[1331]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:32:55.071595 ignition[1331]: INFO : files: 
createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:32:55.071595 ignition[1331]: INFO : files: files passed Nov 24 00:32:55.071595 ignition[1331]: INFO : Ignition finished successfully Nov 24 00:32:55.070716 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:32:55.075010 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:32:55.080406 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:32:55.092520 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:32:55.092668 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:32:55.108934 initrd-setup-root-after-ignition[1362]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:32:55.108934 initrd-setup-root-after-ignition[1362]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:32:55.112207 initrd-setup-root-after-ignition[1366]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:32:55.112982 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:32:55.114436 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:32:55.116243 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:32:55.184978 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:32:55.185129 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 00:32:55.186426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:32:55.187596 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:32:55.188641 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:32:55.190245 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:32:55.226784 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:32:55.229316 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:32:55.250620 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:32:55.251541 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:32:55.252753 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:32:55.253695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:32:55.253962 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:32:55.255158 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:32:55.256104 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:32:55.257224 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:32:55.258027 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:32:55.258826 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:32:55.259611 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:32:55.260505 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:32:55.261353 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
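The files stage above downloads payloads with per-attempt logging ("GET ...: attempt #1"), writes files under /sysroot (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, /etc/flatcar/update.conf, the result file), creates the link /sysroot/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw, and sets the preset for prepare-helm.service to enabled. A compact sketch of those operations; the retry loop, the preset drop-in filename, and the attempt count are illustrative assumptions, while the URLs and paths are the ones from the log:

```python
import os
import urllib.request

SYSROOT = "/sysroot"

def fetch(url, dest, attempts=3):
    """GET a payload with simple attempt logging, like the 'attempt #N' lines above."""
    data = None
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                data = resp.read()
            break
        except OSError:
            if attempt == attempts:
                raise
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, "wb") as fh:
        fh.write(data)

def write_link(sysroot=SYSROOT):
    # "/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
    link = os.path.join(sysroot, "etc/extensions/kubernetes.raw")
    target = "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.islink(link):
        os.symlink(target, link)

def enable_preset(sysroot=SYSROOT, unit="prepare-helm.service"):
    # "setting preset to enabled" sketched as a preset drop-in; the real mechanism may differ.
    preset_dir = os.path.join(sysroot, "etc/systemd/system-preset")
    os.makedirs(preset_dir, exist_ok=True)
    with open(os.path.join(preset_dir, "20-ignition.preset"), "a", encoding="utf-8") as fh:
        fh.write(f"enable {unit}\n")

if __name__ == "__main__":
    fetch("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
          os.path.join(SYSROOT, "opt/helm-v3.17.0-linux-amd64.tar.gz"))
    write_link()
    enable_preset()
```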
Nov 24 00:32:55.262213 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:32:55.263382 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:32:55.264206 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:32:55.265009 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:32:55.265235 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:32:55.266285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:32:55.267302 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:32:55.267945 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:32:55.268786 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:32:55.269407 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:32:55.269626 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:32:55.271011 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:32:55.271211 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:32:55.271993 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:32:55.272192 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:32:55.274981 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:32:55.275581 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:32:55.275826 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:32:55.279115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:32:55.279713 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:32:55.280084 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:32:55.283231 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:32:55.283449 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:32:55.289971 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:32:55.290098 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:32:55.315673 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:32:55.320127 ignition[1386]: INFO : Ignition 2.22.0 Nov 24 00:32:55.320127 ignition[1386]: INFO : Stage: umount Nov 24 00:32:55.320127 ignition[1386]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:32:55.320127 ignition[1386]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:32:55.320127 ignition[1386]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:32:55.324860 ignition[1386]: INFO : PUT result: OK Nov 24 00:32:55.323928 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 24 00:32:55.327866 ignition[1386]: INFO : umount: umount passed Nov 24 00:32:55.327866 ignition[1386]: INFO : Ignition finished successfully Nov 24 00:32:55.324469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:32:55.329961 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:32:55.330145 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:32:55.331092 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:32:55.331160 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 24 00:32:55.331602 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:32:55.331662 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:32:55.332326 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:32:55.332385 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 24 00:32:55.333112 systemd[1]: Stopped target network.target - Network. Nov 24 00:32:55.333982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:32:55.334053 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:32:55.334653 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:32:55.335273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:32:55.340972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:32:55.341727 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:32:55.343312 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:32:55.344085 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:32:55.344151 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:32:55.345102 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:32:55.345157 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:32:55.345955 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:32:55.346042 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:32:55.346748 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:32:55.346813 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:32:55.347423 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:32:55.347493 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:32:55.348268 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:32:55.349056 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:32:55.356906 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:32:55.357063 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:32:55.359337 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:32:55.359663 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:32:55.359794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:32:55.365324 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:32:55.367696 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:32:55.368510 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:32:55.368570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:32:55.370489 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:32:55.372486 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:32:55.372571 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:32:55.373205 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:32:55.373263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 24 00:32:55.376048 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:32:55.376120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:32:55.376972 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:32:55.377033 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:32:55.378076 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:32:55.381294 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:32:55.381393 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:32:55.398620 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:32:55.399024 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:32:55.400973 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:32:55.401093 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:32:55.403620 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:32:55.403712 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:32:55.404912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:32:55.404963 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:32:55.405618 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:32:55.405693 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:32:55.406819 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:32:55.406907 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 00:32:55.408090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:32:55.408166 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:32:55.410404 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:32:55.411905 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:32:55.411988 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:32:55.416628 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:32:55.417240 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:32:55.418520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:32:55.418594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:32:55.421469 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 00:32:55.421545 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:32:55.421597 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:32:55.429141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:32:55.429267 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:32:55.430752 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:32:55.434040 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Nov 24 00:32:55.470498 systemd[1]: Switching root. Nov 24 00:32:55.541828 systemd-journald[188]: Journal stopped Nov 24 00:32:57.426739 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Nov 24 00:32:57.426831 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:32:57.428226 kernel: SELinux: policy capability open_perms=1 Nov 24 00:32:57.428261 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:32:57.428279 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:32:57.428309 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:32:57.428332 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:32:57.428355 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:32:57.428377 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:32:57.428394 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:32:57.428412 kernel: audit: type=1403 audit(1763944376.028:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:32:57.428433 systemd[1]: Successfully loaded SELinux policy in 81.618ms. Nov 24 00:32:57.428459 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.399ms. Nov 24 00:32:57.428480 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:32:57.428500 systemd[1]: Detected virtualization amazon. Nov 24 00:32:57.428519 systemd[1]: Detected architecture x86-64. Nov 24 00:32:57.428537 systemd[1]: Detected first boot. Nov 24 00:32:57.428556 systemd[1]: Initializing machine ID from VM UUID. Nov 24 00:32:57.428576 kernel: Guest personality initialized and is inactive Nov 24 00:32:57.428593 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 00:32:57.428610 kernel: Initialized host personality Nov 24 00:32:57.428635 zram_generator::config[1430]: No configuration found. Nov 24 00:32:57.428657 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:32:57.428675 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:32:57.428694 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:32:57.428718 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:32:57.428736 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:32:57.428756 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:32:57.428774 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:32:57.428796 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:32:57.428818 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:32:57.428837 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 24 00:32:57.428869 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:32:57.428888 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:32:57.428906 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:32:57.428925 systemd[1]: Created slice user.slice - User and Session Slice. 
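The "systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR ...)" banner above encodes compile-time features as +NAME / -NAME tokens. A small helper that splits that banner into enabled and disabled sets, fed the exact string from this boot:

```python
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
            "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def parse_features(feature_string):
    """Split systemd's feature banner into (enabled, disabled) sets."""
    enabled, disabled = set(), set()
    for token in feature_string.split():
        (enabled if token.startswith("+") else disabled).add(token[1:])
    return enabled, disabled

if __name__ == "__main__":
    enabled, disabled = parse_features(FEATURES)
    print(f"{len(enabled)} features enabled, {len(disabled)} disabled")
    print("SELINUX enabled:", "SELINUX" in enabled)  # matches the SELinux policy load above
```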
Nov 24 00:32:57.428944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:32:57.428964 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:32:57.428986 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:32:57.429006 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:32:57.429026 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:32:57.429045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:32:57.429064 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:32:57.429085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:32:57.429103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:32:57.429122 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 00:32:57.429144 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:32:57.429898 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:32:57.429928 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:32:57.429948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:32:57.429967 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:32:57.429985 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:32:57.430003 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:32:57.430021 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:32:57.430040 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:32:57.430065 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:32:57.430085 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:32:57.430105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:32:57.430125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:32:57.430146 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:32:57.430166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:32:57.430184 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:32:57.430205 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:32:57.430223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:57.430247 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:32:57.430268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 24 00:32:57.430288 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:32:57.430307 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:32:57.430328 systemd[1]: Reached target machines.target - Containers. 
Nov 24 00:32:57.430348 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:32:57.430368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:32:57.430388 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:32:57.430413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:32:57.430433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:32:57.430455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:32:57.430475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:32:57.430496 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:32:57.430518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:32:57.430540 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 00:32:57.430561 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:32:57.430587 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:32:57.430610 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:32:57.430631 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:32:57.430655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:32:57.430676 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:32:57.430699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:32:57.430722 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:32:57.430744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:32:57.430766 kernel: loop: module loaded Nov 24 00:32:57.430791 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:32:57.430813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:32:57.432007 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:32:57.432048 systemd[1]: Stopped verity-setup.service. Nov 24 00:32:57.432069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:57.432087 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:32:57.432105 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:32:57.432168 systemd-journald[1513]: Collecting audit messages is disabled. Nov 24 00:32:57.432213 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:32:57.432237 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 24 00:32:57.432256 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:32:57.432277 systemd-journald[1513]: Journal started Nov 24 00:32:57.432331 systemd-journald[1513]: Runtime Journal (/run/log/journal/ec27e9ea4292c70281785eb094ec148a) is 4.7M, max 38.1M, 33.3M free. 
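The runtime journal reported above lives under /run/log/journal/<machine-id> (here ec27e9ea4292c70281785eb094ec148a, the ID initialized from the VM UUID a few lines earlier), and journald prints its current size against the configured cap. A sketch that locates that directory from /etc/machine-id and totals the journal files in it; the size formatting is approximate rather than journald's exact rounding:

```python
import os

def runtime_journal_usage(machine_id_file="/etc/machine-id",
                          journal_root="/run/log/journal"):
    """Sum the on-disk size of the runtime journal for this machine ID."""
    with open(machine_id_file, "r", encoding="utf-8") as fh:
        machine_id = fh.read().strip()
    journal_dir = os.path.join(journal_root, machine_id)
    total = 0
    for root, _dirs, files in os.walk(journal_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    print(f"Runtime Journal ({journal_dir}) is {total / 2**20:.1f}M")
    return total

if __name__ == "__main__":
    runtime_journal_usage()
```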
Nov 24 00:32:57.062069 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:32:57.085171 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 24 00:32:57.085630 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:32:57.438879 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:32:57.441558 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:32:57.445053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:32:57.446124 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:32:57.446801 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:32:57.448546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:32:57.449947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:32:57.451029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:32:57.452044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:32:57.454452 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:32:57.454677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:32:57.464612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:32:57.468273 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:32:57.469585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:32:57.485334 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:32:57.491878 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:32:57.506733 kernel: fuse: init (API version 7.41) Nov 24 00:32:57.498993 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:32:57.499666 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:32:57.499716 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:32:57.507235 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:32:57.511007 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 24 00:32:57.513187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:32:57.519017 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:32:57.529109 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:32:57.529983 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:32:57.538873 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:32:57.539601 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:32:57.541905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:32:57.549925 kernel: ACPI: bus type drm_connector registered Nov 24 00:32:57.549492 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
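Each modprobe@<name>.service above (configfs, dm_mod, efi_pstore, loop, and later fuse and drm) is a oneshot that loads its instance's kernel module and exits, which is why the unit is "Deactivated successfully" right after it finishes. A sketch of the same idea driven from user space; the module list is the set visible in this log, and this is not the template unit's exact command line:

```python
import subprocess

# Instance names seen in the log: modprobe@configfs, @dm_mod, @efi_pstore, @loop, plus fuse and drm.
MODULES = ["configfs", "dm_mod", "efi_pstore", "loop", "fuse", "drm"]

def load_modules(modules=MODULES):
    for module in modules:
        # Equivalent in spirit to the oneshot unit's ExecStart; requires root.
        result = subprocess.run(["modprobe", module], capture_output=True, text=True)
        status = "Finished" if result.returncode == 0 else "Failed"
        print(f"{status} modprobe@{module}.service - Load Kernel Module {module}")

if __name__ == "__main__":
    load_modules()
```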
Nov 24 00:32:57.580761 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:32:57.587108 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:32:57.588256 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:32:57.590339 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:32:57.599662 systemd-journald[1513]: Time spent on flushing to /var/log/journal/ec27e9ea4292c70281785eb094ec148a is 49.401ms for 1014 entries. Nov 24 00:32:57.599662 systemd-journald[1513]: System Journal (/var/log/journal/ec27e9ea4292c70281785eb094ec148a) is 8M, max 195.6M, 187.6M free. Nov 24 00:32:57.666442 systemd-journald[1513]: Received client request to flush runtime journal. Nov 24 00:32:57.666502 kernel: loop0: detected capacity change from 0 to 72368 Nov 24 00:32:57.595114 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:32:57.596923 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:32:57.598636 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:32:57.619263 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:32:57.620330 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:32:57.632096 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:32:57.647979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:32:57.669917 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:32:57.676826 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:32:57.685487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:32:57.694932 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:32:57.749923 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:32:57.753478 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:32:57.780625 systemd-tmpfiles[1576]: ACLs are not supported, ignoring. Nov 24 00:32:57.780654 systemd-tmpfiles[1576]: ACLs are not supported, ignoring. Nov 24 00:32:57.787168 kernel: loop1: detected capacity change from 0 to 128560 Nov 24 00:32:57.790436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:32:57.927885 kernel: loop2: detected capacity change from 0 to 224512 Nov 24 00:32:58.088238 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:32:58.092252 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:32:58.119818 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:32:58.162868 kernel: loop3: detected capacity change from 0 to 110984 Nov 24 00:32:58.278898 kernel: loop4: detected capacity change from 0 to 72368 Nov 24 00:32:58.291877 kernel: loop5: detected capacity change from 0 to 128560 Nov 24 00:32:58.319101 kernel: loop6: detected capacity change from 0 to 224512 Nov 24 00:32:58.342890 kernel: loop7: detected capacity change from 0 to 110984 Nov 24 00:32:58.356087 (sd-merge)[1588]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 24 00:32:58.357172 (sd-merge)[1588]: Merged extensions into '/usr'. 
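The (sd-merge) lines above are systemd-sysext combining the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami, each backed by one of the loop devices reported just before) into an overlay on /usr. A sketch that merely enumerates candidate extension images; the directory list is the usual sysext search path and may not match what exists on this particular image:

```python
import os

# Commonly used systemd-sysext search directories for extension images.
SYSEXT_DIRS = [
    "/etc/extensions",
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
]

def list_extension_images(dirs=SYSEXT_DIRS):
    """Return extension names, e.g. 'kubernetes' for kubernetes.raw."""
    names = []
    for directory in dirs:
        if not os.path.isdir(directory):
            continue
        for entry in sorted(os.listdir(directory)):
            names.append(entry[:-4] if entry.endswith(".raw") else entry)
    return names

if __name__ == "__main__":
    found = list_extension_images()
    print("Using extensions", ", ".join(repr(n) for n in found) or "(none)")
```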
Nov 24 00:32:58.385346 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:32:58.385524 systemd[1]: Reloading... Nov 24 00:32:58.538874 zram_generator::config[1614]: No configuration found. Nov 24 00:32:58.844355 systemd[1]: Reloading finished in 458 ms. Nov 24 00:32:58.874348 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:32:58.876442 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:32:58.905334 systemd[1]: Starting ensure-sysext.service... Nov 24 00:32:58.911039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:32:58.915175 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:32:58.959570 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 24 00:32:58.962355 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:32:58.962756 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:32:58.963092 systemd[1]: Reload requested from client PID 1666 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:32:58.963107 systemd[1]: Reloading... Nov 24 00:32:58.963165 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:32:58.969241 systemd-tmpfiles[1667]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:32:58.971053 systemd-tmpfiles[1667]: ACLs are not supported, ignoring. Nov 24 00:32:58.971149 systemd-tmpfiles[1667]: ACLs are not supported, ignoring. Nov 24 00:32:58.979832 systemd-udevd[1668]: Using default interface naming scheme 'v255'. Nov 24 00:32:58.985710 systemd-tmpfiles[1667]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:32:58.985726 systemd-tmpfiles[1667]: Skipping /boot Nov 24 00:32:58.999034 systemd-tmpfiles[1667]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:32:58.999051 systemd-tmpfiles[1667]: Skipping /boot Nov 24 00:32:59.096958 zram_generator::config[1705]: No configuration found. Nov 24 00:32:59.329930 ldconfig[1556]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:32:59.490293 (udev-worker)[1713]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:32:59.686871 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:32:59.737899 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 24 00:32:59.775103 kernel: ACPI: button: Power Button [PWRF] Nov 24 00:32:59.789878 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 24 00:32:59.805292 systemd[1]: Reloading finished in 841 ms. Nov 24 00:32:59.817012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:32:59.820479 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:32:59.821930 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:32:59.822865 kernel: ACPI: button: Sleep Button [SLPF] Nov 24 00:32:59.857051 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
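The 'Duplicate line for path "...", ignoring' warnings above come from systemd-tmpfiles noticing that the same path is declared more than once across tmpfiles.d fragments; the later declaration for an already-seen path is dropped. A rough sketch of that first-wins de-duplication over /usr/lib/tmpfiles.d; parsing is reduced to whitespace-split lines and ignores tmpfiles' full quoting and specifier rules:

```python
import glob
import os

def scan_tmpfiles(config_dir="/usr/lib/tmpfiles.d"):
    """First declaration of a path wins; later ones are reported and ignored."""
    seen = {}  # path -> (file, line number) of the first declaration
    for conf in sorted(glob.glob(os.path.join(config_dir, "*.conf"))):
        with open(conf, "r", encoding="utf-8") as fh:
            for lineno, line in enumerate(fh, start=1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) < 2:
                    continue
                path = fields[1]
                if path in seen:
                    print(f'{conf}:{lineno}: Duplicate line for path "{path}", ignoring.')
                else:
                    seen[path] = (conf, lineno)
    return seen

if __name__ == "__main__":
    scan_tmpfiles()
```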
Nov 24 00:32:59.858837 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:32:59.865277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 00:32:59.872110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:32:59.877123 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:32:59.887357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:32:59.893153 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:32:59.906787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:59.907603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:32:59.911227 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:32:59.914925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:32:59.921983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:32:59.922691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:32:59.922973 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:32:59.923128 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:59.933074 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:32:59.940018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:59.940339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:32:59.940575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:32:59.940709 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:32:59.940870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:59.953867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:59.954293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:32:59.958249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:32:59.960120 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 24 00:32:59.960323 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:32:59.960594 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:32:59.962974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:32:59.976992 systemd[1]: Finished ensure-sysext.service. Nov 24 00:33:00.017310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:33:00.017560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:33:00.019126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:33:00.021502 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:33:00.029863 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:33:00.030437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:33:00.031646 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:33:00.031925 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:33:00.043878 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 24 00:33:00.049506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:33:00.049791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:33:00.050764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:33:00.075707 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:33:00.080119 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:33:00.105870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:33:00.107695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 24 00:33:00.123517 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:33:00.148737 augenrules[1915]: No rules Nov 24 00:33:00.154087 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:33:00.154385 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:33:00.188674 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 24 00:33:00.196075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:33:00.240001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:33:00.286893 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:33:00.347980 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:33:00.467990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 24 00:33:00.592818 systemd-networkd[1850]: lo: Link UP Nov 24 00:33:00.592837 systemd-networkd[1850]: lo: Gained carrier Nov 24 00:33:00.594669 systemd-networkd[1850]: Enumeration completed Nov 24 00:33:00.594818 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:33:00.595907 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:33:00.595921 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:33:00.599185 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:33:00.602621 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:33:00.604200 systemd-networkd[1850]: eth0: Link UP Nov 24 00:33:00.604499 systemd-networkd[1850]: eth0: Gained carrier Nov 24 00:33:00.604532 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:33:00.614960 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.20.18/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 24 00:33:00.637381 systemd-resolved[1851]: Positive Trust Anchors: Nov 24 00:33:00.637401 systemd-resolved[1851]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:33:00.637449 systemd-resolved[1851]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:33:00.639230 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:33:00.642272 systemd-resolved[1851]: Defaulting to hostname 'linux'. Nov 24 00:33:00.649526 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:33:00.651455 systemd[1]: Reached target network.target - Network. Nov 24 00:33:00.653063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:33:00.653568 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:33:00.654302 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 00:33:00.657695 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:33:00.658221 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:33:00.658971 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:33:00.659630 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:33:00.660119 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:33:00.660539 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
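The positive trust anchor printed by systemd-resolved above is the root zone's DS record used for DNSSEC validation: owner ".", key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest of the root key-signing key. A small parser for that presentation format, fed the exact record from the log:

```python
DS_RECORD = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

# A few RFC-assigned code points, enough to label this particular record.
ALGORITHMS = {8: "RSASHA256", 13: "ECDSAP256SHA256"}
DIGEST_TYPES = {1: "SHA-1", 2: "SHA-256"}

def parse_ds(record):
    owner, _rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split()
    assert rrtype == "DS"
    return {
        "owner": owner,
        "key_tag": int(key_tag),
        "algorithm": ALGORITHMS.get(int(algorithm), algorithm),
        "digest_type": DIGEST_TYPES.get(int(digest_type), digest_type),
        "digest": digest,
    }

if __name__ == "__main__":
    print(parse_ds(DS_RECORD))
```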
Nov 24 00:33:00.660584 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:33:00.661115 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:33:00.663724 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:33:00.669669 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:33:00.674787 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:33:00.677361 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:33:00.678321 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:33:00.702139 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:33:00.703628 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 24 00:33:00.712517 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:33:00.718645 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:33:00.720494 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:33:00.721669 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:33:00.721748 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:33:00.724487 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:33:00.737592 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 00:33:00.748977 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:33:00.768882 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:33:00.780570 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:33:00.788607 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:33:00.789944 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:33:00.806590 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:33:00.855767 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 00:33:00.982628 systemd[1]: Started ntpd.service - Network Time Service. Nov 24 00:33:00.998245 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:33:01.012580 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 24 00:33:01.059322 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:33:01.072160 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:33:01.081022 jq[1951]: false Nov 24 00:33:01.094674 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:33:01.101692 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:33:01.103000 oslogin_cache_refresh[1953]: Refreshing passwd entry cache Nov 24 00:33:01.108139 google_oslogin_nss_cache[1953]: oslogin_cache_refresh[1953]: Refreshing passwd entry cache Nov 24 00:33:01.104948 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Nov 24 00:33:01.106825 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:33:01.123293 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:33:01.142109 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:33:01.143270 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:33:01.143569 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:33:01.158875 google_oslogin_nss_cache[1953]: oslogin_cache_refresh[1953]: Failure getting users, quitting Nov 24 00:33:01.158875 google_oslogin_nss_cache[1953]: oslogin_cache_refresh[1953]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:33:01.158875 google_oslogin_nss_cache[1953]: oslogin_cache_refresh[1953]: Refreshing group entry cache Nov 24 00:33:01.152521 oslogin_cache_refresh[1953]: Failure getting users, quitting Nov 24 00:33:01.152548 oslogin_cache_refresh[1953]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:33:01.152609 oslogin_cache_refresh[1953]: Refreshing group entry cache Nov 24 00:33:01.189289 oslogin_cache_refresh[1953]: Failure getting groups, quitting Nov 24 00:33:01.193199 google_oslogin_nss_cache[1953]: oslogin_cache_refresh[1953]: Failure getting groups, quitting Nov 24 00:33:01.193199 google_oslogin_nss_cache[1953]: oslogin_cache_refresh[1953]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:33:01.189307 oslogin_cache_refresh[1953]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:33:01.199319 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:33:01.204974 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:33:01.235794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:33:01.240809 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 24 00:33:01.284264 extend-filesystems[1952]: Found /dev/nvme0n1p6 Nov 24 00:33:01.293520 jq[1970]: true Nov 24 00:33:01.434346 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:33:01.437168 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:33:01.518246 jq[1990]: true Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:42 UTC 2025 (1): Starting Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: ---------------------------------------------------- Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: corporation. 
Support and training for ntp-4 are Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: available at https://www.nwtime.org/support Nov 24 00:33:01.518624 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: ---------------------------------------------------- Nov 24 00:33:01.517493 ntpd[1955]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:42 UTC 2025 (1): Starting Nov 24 00:33:01.517594 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:33:01.517606 ntpd[1955]: ---------------------------------------------------- Nov 24 00:33:01.551046 extend-filesystems[1952]: Found /dev/nvme0n1p9 Nov 24 00:33:01.627188 kernel: ntpd[1955]: segfault at 24 ip 000055ef420e7aeb sp 00007ffd9ea2f2e0 error 4 in ntpd[68aeb,55ef42085000+80000] likely on CPU 0 (core 0, socket 0) Nov 24 00:33:01.627233 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Nov 24 00:33:01.517616 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: proto: precision = 0.095 usec (-23) Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: basedate set to 2025-11-11 Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: gps base set to 2025-11-16 (week 2393) Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Listen normally on 3 eth0 172.31.20.18:123 Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: Listen normally on 4 lo [::1]:123 Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: bind(21) AF_INET6 [fe80::4e2:43ff:fe06:ca4b%2]:123 flags 0x811 failed: Cannot assign requested address Nov 24 00:33:01.631484 ntpd[1955]: 24 Nov 00:33:01 ntpd[1955]: unable to create socket on eth0 (5) for [fe80::4e2:43ff:fe06:ca4b%2]:123 Nov 24 00:33:01.558751 (ntainerd)[1991]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:33:01.696926 tar[1976]: linux-amd64/LICENSE Nov 24 00:33:01.696926 tar[1976]: linux-amd64/helm Nov 24 00:33:01.517625 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:33:01.697402 update_engine[1966]: I20251124 00:33:01.656534 1966 main.cc:92] Flatcar Update Engine starting Nov 24 00:33:01.697603 extend-filesystems[1952]: Checking size of /dev/nvme0n1p9 Nov 24 00:33:01.665297 systemd-coredump[2001]: Process 1955 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 24 00:33:01.517635 ntpd[1955]: corporation. Support and training for ntp-4 are Nov 24 00:33:01.670013 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 24 00:33:01.517644 ntpd[1955]: available at https://www.nwtime.org/support Nov 24 00:33:01.734109 systemd[1]: Started systemd-coredump@0-2001-0.service - Process Core Dump (PID 2001/UID 0). 
Nov 24 00:33:01.517654 ntpd[1955]: ---------------------------------------------------- Nov 24 00:33:01.567026 ntpd[1955]: proto: precision = 0.095 usec (-23) Nov 24 00:33:01.574506 ntpd[1955]: basedate set to 2025-11-11 Nov 24 00:33:01.574593 ntpd[1955]: gps base set to 2025-11-16 (week 2393) Nov 24 00:33:01.574754 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:33:01.574783 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:33:01.578490 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:33:01.578528 ntpd[1955]: Listen normally on 3 eth0 172.31.20.18:123 Nov 24 00:33:01.578568 ntpd[1955]: Listen normally on 4 lo [::1]:123 Nov 24 00:33:01.578605 ntpd[1955]: bind(21) AF_INET6 [fe80::4e2:43ff:fe06:ca4b%2]:123 flags 0x811 failed: Cannot assign requested address Nov 24 00:33:01.578626 ntpd[1955]: unable to create socket on eth0 (5) for [fe80::4e2:43ff:fe06:ca4b%2]:123 Nov 24 00:33:01.781218 dbus-daemon[1949]: [system] SELinux support is enabled Nov 24 00:33:01.794245 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 00:33:01.824448 coreos-metadata[1948]: Nov 24 00:33:01.781 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 24 00:33:01.824448 coreos-metadata[1948]: Nov 24 00:33:01.822 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 24 00:33:01.851167 coreos-metadata[1948]: Nov 24 00:33:01.849 INFO Fetch successful Nov 24 00:33:01.851167 coreos-metadata[1948]: Nov 24 00:33:01.849 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 24 00:33:01.833612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:33:01.833684 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:33:01.841669 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:33:01.841699 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 24 00:33:01.881278 coreos-metadata[1948]: Nov 24 00:33:01.864 INFO Fetch successful Nov 24 00:33:01.881278 coreos-metadata[1948]: Nov 24 00:33:01.864 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 24 00:33:01.876674 dbus-daemon[1949]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 24 00:33:01.882443 coreos-metadata[1948]: Nov 24 00:33:01.882 INFO Fetch successful Nov 24 00:33:01.882443 coreos-metadata[1948]: Nov 24 00:33:01.882 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 24 00:33:01.892053 coreos-metadata[1948]: Nov 24 00:33:01.891 INFO Fetch successful Nov 24 00:33:01.892053 coreos-metadata[1948]: Nov 24 00:33:01.892 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 24 00:33:01.908236 coreos-metadata[1948]: Nov 24 00:33:01.901 INFO Fetch failed with 404: resource not found Nov 24 00:33:01.908236 coreos-metadata[1948]: Nov 24 00:33:01.901 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 24 00:33:01.910540 coreos-metadata[1948]: Nov 24 00:33:01.910 INFO Fetch successful Nov 24 00:33:01.910540 coreos-metadata[1948]: Nov 24 00:33:01.910 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 24 00:33:01.914079 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 00:33:01.926082 coreos-metadata[1948]: Nov 24 00:33:01.921 INFO Fetch successful Nov 24 00:33:01.932254 coreos-metadata[1948]: Nov 24 00:33:01.930 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 24 00:33:01.964009 coreos-metadata[1948]: Nov 24 00:33:01.935 INFO Fetch successful Nov 24 00:33:01.964009 coreos-metadata[1948]: Nov 24 00:33:01.946 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 24 00:33:01.964009 coreos-metadata[1948]: Nov 24 00:33:01.956 INFO Fetch successful Nov 24 00:33:01.955582 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 24 00:33:01.964488 update_engine[1966]: I20251124 00:33:01.947383 1966 update_check_scheduler.cc:74] Next update check in 2m26s Nov 24 00:33:01.956483 systemd[1]: Started update-engine.service - Update Engine. Nov 24 00:33:01.987404 extend-filesystems[1952]: Resized partition /dev/nvme0n1p9 Nov 24 00:33:01.990567 coreos-metadata[1948]: Nov 24 00:33:01.978 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 24 00:33:01.990567 coreos-metadata[1948]: Nov 24 00:33:01.988 INFO Fetch successful Nov 24 00:33:02.007923 systemd-networkd[1850]: eth0: Gained IPv6LL Nov 24 00:33:02.016238 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:33:02.042145 extend-filesystems[2029]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 00:33:02.032052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:33:02.037348 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:33:02.051529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:33:02.061708 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
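
The coreos-metadata lines above trace the standard EC2 IMDSv2 exchange: a PUT to http://169.254.169.254/latest/api/token for a session token, then GETs against the 2021-01-03 meta-data paths (instance-id, instance-type, local-ipv4, public-ipv4, placement, hostname), with the absent ipv6 entry answered by a 404. A minimal Python sketch of that flow follows; it assumes the documented IMDSv2 token headers and an arbitrary 6-hour token TTL, and it is an illustration of the protocol, not the agent's own code.

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl=21600):
        # IMDSv2: obtain a short-lived session token with a PUT request.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path, token):
        # Fetch one meta-data item, as the agent does for instance-id, local-ipv4, etc.
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        tok = imds_token()
        for item in ("instance-id", "instance-type", "local-ipv4",
                     "placement/availability-zone"):
            print(item, "=", imds_get(item, tok))

Run on an EC2 instance this prints the same items the agent reports as "Fetch successful"; off EC2 the link-local address is unreachable and the calls simply time out.
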
Nov 24 00:33:02.107138 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 24 00:33:02.107231 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:33:02.084210 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 24 00:33:02.109412 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:33:02.126179 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 24 00:33:02.132214 systemd[1]: Starting sshkeys.service... Nov 24 00:33:02.318731 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 24 00:33:02.322876 systemd-logind[1963]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 00:33:02.322908 systemd-logind[1963]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 24 00:33:02.322934 systemd-logind[1963]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:33:02.323877 systemd-logind[1963]: New seat seat0. Nov 24 00:33:02.333413 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 24 00:33:02.379805 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:33:02.383376 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:33:02.472418 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:33:02.485161 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 00:33:02.509872 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 24 00:33:02.540688 extend-filesystems[2029]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 24 00:33:02.540688 extend-filesystems[2029]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 24 00:33:02.540688 extend-filesystems[2029]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 24 00:33:02.537086 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:33:02.575058 extend-filesystems[1952]: Resized filesystem in /dev/nvme0n1p9 Nov 24 00:33:02.537411 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 00:33:02.597478 coreos-metadata[2076]: Nov 24 00:33:02.594 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 24 00:33:02.597478 coreos-metadata[2076]: Nov 24 00:33:02.595 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 24 00:33:02.597478 coreos-metadata[2076]: Nov 24 00:33:02.596 INFO Fetch successful Nov 24 00:33:02.597478 coreos-metadata[2076]: Nov 24 00:33:02.596 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 24 00:33:02.597478 coreos-metadata[2076]: Nov 24 00:33:02.597 INFO Fetch successful Nov 24 00:33:02.602197 unknown[2076]: wrote ssh authorized keys file for user: core Nov 24 00:33:02.694700 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 24 00:33:02.698031 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 24 00:33:02.699327 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2016 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 24 00:33:02.708038 systemd[1]: Starting polkit.service - Authorization Manager... 
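
For scale, the resize recorded here grows the root filesystem from 553472 to 3587067 blocks of 4 KiB each. Converting those block counts is a quick sanity check (sketch only; the numbers are taken verbatim from the resize2fs/kernel messages above):

    # 4 KiB blocks, counts from the EXT4 resize messages above.
    BLOCK = 4096
    before_blocks, after_blocks = 553472, 3587067

    before_gib = before_blocks * BLOCK / 2**30   # ~2.11 GiB as shipped in the image
    after_gib = after_blocks * BLOCK / 2**30     # ~13.68 GiB after on-line resize
    print(f"{before_gib:.2f} GiB -> {after_gib:.2f} GiB")
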
Nov 24 00:33:02.711781 update-ssh-keys[2131]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:33:02.712239 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 00:33:02.719907 systemd[1]: Finished sshkeys.service. Nov 24 00:33:02.731146 locksmithd[2021]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:33:02.763769 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:33:02.852818 systemd-coredump[2003]: Process 1955 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1955: #0 0x000055ef420e7aeb n/a (ntpd + 0x68aeb) #1 0x000055ef42090cdf n/a (ntpd + 0x11cdf) #2 0x000055ef42091575 n/a (ntpd + 0x12575) #3 0x000055ef4208cd8a n/a (ntpd + 0xdd8a) #4 0x000055ef4208e5d3 n/a (ntpd + 0xf5d3) #5 0x000055ef42096fd1 n/a (ntpd + 0x17fd1) #6 0x000055ef42087c2d n/a (ntpd + 0x8c2d) #7 0x00007fe5bd97d16c n/a (libc.so.6 + 0x2716c) #8 0x00007fe5bd97d229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055ef42087c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Nov 24 00:33:02.856579 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 24 00:33:02.856766 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 24 00:33:02.868073 systemd[1]: systemd-coredump@0-2001-0.service: Deactivated successfully. Nov 24 00:33:02.891472 amazon-ssm-agent[2043]: Initializing new seelog logger Nov 24 00:33:02.903561 amazon-ssm-agent[2043]: New Seelog Logger Creation Complete Nov 24 00:33:02.903561 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.903561 amazon-ssm-agent[2043]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.903561 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 processing appconfig overrides Nov 24 00:33:02.905212 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.907217 amazon-ssm-agent[2043]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.907217 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 processing appconfig overrides Nov 24 00:33:02.917816 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.917816 amazon-ssm-agent[2043]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.917816 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 processing appconfig overrides Nov 24 00:33:02.917816 amazon-ssm-agent[2043]: 2025-11-24 00:33:02.9049 INFO Proxy environment variables: Nov 24 00:33:02.930921 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.930921 amazon-ssm-agent[2043]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:02.930921 amazon-ssm-agent[2043]: 2025/11/24 00:33:02 processing appconfig overrides Nov 24 00:33:03.020301 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. 
Nov 24 00:33:03.022865 amazon-ssm-agent[2043]: 2025-11-24 00:33:02.9050 INFO https_proxy: Nov 24 00:33:03.028631 systemd[1]: Started ntpd.service - Network Time Service. Nov 24 00:33:03.122905 amazon-ssm-agent[2043]: 2025-11-24 00:33:02.9050 INFO http_proxy: Nov 24 00:33:03.167838 ntpd[2180]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:42 UTC 2025 (1): Starting Nov 24 00:33:03.167931 ntpd[2180]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:42 UTC 2025 (1): Starting Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: ---------------------------------------------------- Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: corporation. Support and training for ntp-4 are Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: available at https://www.nwtime.org/support Nov 24 00:33:03.168389 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: ---------------------------------------------------- Nov 24 00:33:03.167941 ntpd[2180]: ---------------------------------------------------- Nov 24 00:33:03.171123 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: proto: precision = 0.081 usec (-23) Nov 24 00:33:03.167950 ntpd[2180]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:33:03.167958 ntpd[2180]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:33:03.167967 ntpd[2180]: corporation. Support and training for ntp-4 are Nov 24 00:33:03.167975 ntpd[2180]: available at https://www.nwtime.org/support Nov 24 00:33:03.167984 ntpd[2180]: ---------------------------------------------------- Nov 24 00:33:03.168798 ntpd[2180]: proto: precision = 0.081 usec (-23) Nov 24 00:33:03.176120 ntpd[2180]: basedate set to 2025-11-11 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: basedate set to 2025-11-11 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: gps base set to 2025-11-16 (week 2393) Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listen normally on 3 eth0 172.31.20.18:123 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listen normally on 4 lo [::1]:123 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listen normally on 5 eth0 [fe80::4e2:43ff:fe06:ca4b%2]:123 Nov 24 00:33:03.176998 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: Listening on routing socket on fd #22 for interface updates Nov 24 00:33:03.176145 ntpd[2180]: gps base set to 2025-11-16 (week 2393) Nov 24 00:33:03.176257 ntpd[2180]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:33:03.176283 ntpd[2180]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:33:03.176492 ntpd[2180]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:33:03.176519 ntpd[2180]: Listen normally on 3 eth0 172.31.20.18:123 Nov 24 00:33:03.176547 ntpd[2180]: Listen normally on 4 lo [::1]:123 Nov 24 00:33:03.176572 ntpd[2180]: Listen normally on 5 eth0 
[fe80::4e2:43ff:fe06:ca4b%2]:123 Nov 24 00:33:03.176597 ntpd[2180]: Listening on routing socket on fd #22 for interface updates Nov 24 00:33:03.191226 ntpd[2180]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:33:03.193997 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:33:03.193997 ntpd[2180]: 24 Nov 00:33:03 ntpd[2180]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:33:03.191271 ntpd[2180]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:33:03.223036 polkitd[2142]: Started polkitd version 126 Nov 24 00:33:03.225240 amazon-ssm-agent[2043]: 2025-11-24 00:33:02.9050 INFO no_proxy: Nov 24 00:33:03.248049 containerd[1991]: time="2025-11-24T00:33:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:33:03.251416 containerd[1991]: time="2025-11-24T00:33:03.251365360Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:33:03.260238 polkitd[2142]: Loading rules from directory /etc/polkit-1/rules.d Nov 24 00:33:03.260789 polkitd[2142]: Loading rules from directory /run/polkit-1/rules.d Nov 24 00:33:03.260876 polkitd[2142]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:33:03.261330 polkitd[2142]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 24 00:33:03.261368 polkitd[2142]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:33:03.261424 polkitd[2142]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 24 00:33:03.271768 polkitd[2142]: Finished loading, compiling and executing 2 rules Nov 24 00:33:03.272944 systemd[1]: Started polkit.service - Authorization Manager. Nov 24 00:33:03.276398 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 24 00:33:03.277908 polkitd[2142]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 24 00:33:03.311766 systemd-hostnamed[2016]: Hostname set to (transient) Nov 24 00:33:03.314026 systemd-resolved[1851]: System hostname changed to 'ip-172-31-20-18'. 
Nov 24 00:33:03.325197 amazon-ssm-agent[2043]: 2025-11-24 00:33:02.9063 INFO Checking if agent identity type OnPrem can be assumed Nov 24 00:33:03.325816 containerd[1991]: time="2025-11-24T00:33:03.325707215Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.748µs" Nov 24 00:33:03.325816 containerd[1991]: time="2025-11-24T00:33:03.325747820Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:33:03.325816 containerd[1991]: time="2025-11-24T00:33:03.325774320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:33:03.326237 containerd[1991]: time="2025-11-24T00:33:03.326005737Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:33:03.326237 containerd[1991]: time="2025-11-24T00:33:03.326036533Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:33:03.326237 containerd[1991]: time="2025-11-24T00:33:03.326069736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326237 containerd[1991]: time="2025-11-24T00:33:03.326139414Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326237 containerd[1991]: time="2025-11-24T00:33:03.326155293Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326596 containerd[1991]: time="2025-11-24T00:33:03.326477073Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326596 containerd[1991]: time="2025-11-24T00:33:03.326503812Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326596 containerd[1991]: time="2025-11-24T00:33:03.326529438Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326596 containerd[1991]: time="2025-11-24T00:33:03.326541910Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:33:03.326759 containerd[1991]: time="2025-11-24T00:33:03.326645576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:33:03.332070 containerd[1991]: time="2025-11-24T00:33:03.331976635Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:33:03.332070 containerd[1991]: time="2025-11-24T00:33:03.332057869Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:33:03.332070 containerd[1991]: time="2025-11-24T00:33:03.332075429Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:33:03.332266 containerd[1991]: time="2025-11-24T00:33:03.332137922Z" level=info msg="loading 
plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:33:03.332586 containerd[1991]: time="2025-11-24T00:33:03.332563838Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:33:03.332704 containerd[1991]: time="2025-11-24T00:33:03.332679618Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:33:03.347077 containerd[1991]: time="2025-11-24T00:33:03.347024986Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:33:03.347345 containerd[1991]: time="2025-11-24T00:33:03.347318904Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:33:03.347404 containerd[1991]: time="2025-11-24T00:33:03.347357483Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:33:03.347404 containerd[1991]: time="2025-11-24T00:33:03.347377277Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:33:03.347404 containerd[1991]: time="2025-11-24T00:33:03.347395079Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:33:03.347505 containerd[1991]: time="2025-11-24T00:33:03.347414846Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:33:03.347505 containerd[1991]: time="2025-11-24T00:33:03.347432897Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:33:03.347505 containerd[1991]: time="2025-11-24T00:33:03.347451048Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:33:03.347505 containerd[1991]: time="2025-11-24T00:33:03.347469099Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:33:03.347505 containerd[1991]: time="2025-11-24T00:33:03.347484155Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:33:03.347661 containerd[1991]: time="2025-11-24T00:33:03.347504034Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:33:03.347661 containerd[1991]: time="2025-11-24T00:33:03.347526819Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:33:03.347728 containerd[1991]: time="2025-11-24T00:33:03.347691010Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:33:03.347728 containerd[1991]: time="2025-11-24T00:33:03.347716182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:33:03.347801 containerd[1991]: time="2025-11-24T00:33:03.347739330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:33:03.347801 containerd[1991]: time="2025-11-24T00:33:03.347755578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:33:03.347801 containerd[1991]: time="2025-11-24T00:33:03.347770401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:33:03.347801 containerd[1991]: time="2025-11-24T00:33:03.347786331Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:33:03.347965 containerd[1991]: time="2025-11-24T00:33:03.347803723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:33:03.347965 containerd[1991]: time="2025-11-24T00:33:03.347819894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:33:03.347965 containerd[1991]: time="2025-11-24T00:33:03.347837447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:33:03.347965 containerd[1991]: time="2025-11-24T00:33:03.347873599Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:33:03.347965 containerd[1991]: time="2025-11-24T00:33:03.347889990Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:33:03.347965 containerd[1991]: time="2025-11-24T00:33:03.347951464Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:33:03.348167 containerd[1991]: time="2025-11-24T00:33:03.347970107Z" level=info msg="Start snapshots syncer" Nov 24 00:33:03.348167 containerd[1991]: time="2025-11-24T00:33:03.347992860Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:33:03.348409 containerd[1991]: time="2025-11-24T00:33:03.348359550Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:33:03.348539 containerd[1991]: time="2025-11-24T00:33:03.348436748Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:33:03.349472 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 
Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354042591Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354367319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354420756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354437934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354471659Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354490794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354505961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354740785Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354788531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354821589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.354892402Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.355010059Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.355037888Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:33:03.356887 containerd[1991]: time="2025-11-24T00:33:03.355051435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355065039Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355095611Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355118842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355143792Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355182492Z" level=info msg="runtime interface created" Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355193452Z" level=info 
msg="created NRI interface" Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355210769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355245939Z" level=info msg="Connect containerd service" Nov 24 00:33:03.357441 containerd[1991]: time="2025-11-24T00:33:03.355674899Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:33:03.361986 containerd[1991]: time="2025-11-24T00:33:03.361834843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:33:03.425884 amazon-ssm-agent[2043]: 2025-11-24 00:33:02.9065 INFO Checking if agent identity type EC2 can be assumed Nov 24 00:33:03.456056 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:33:03.467223 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:33:03.472384 systemd[1]: Started sshd@0-172.31.20.18:22-139.178.89.65:58744.service - OpenSSH per-connection server daemon (139.178.89.65:58744). Nov 24 00:33:03.490765 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:33:03.491044 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:33:03.497978 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 00:33:03.531437 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2538 INFO Agent will take identity from EC2 Nov 24 00:33:03.560229 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:33:03.564332 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:33:03.568186 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:33:03.570461 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:33:03.616998 amazon-ssm-agent[2043]: 2025/11/24 00:33:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:03.616998 amazon-ssm-agent[2043]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:33:03.617154 amazon-ssm-agent[2043]: 2025/11/24 00:33:03 processing appconfig overrides Nov 24 00:33:03.629967 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2621 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2622 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2622 INFO [amazon-ssm-agent] Starting Core Agent Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2622 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2622 INFO [Registrar] Starting registrar module Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2714 INFO [EC2Identity] Checking disk for registration info Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2714 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.2715 INFO [EC2Identity] Generating registration keypair Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.5532 INFO [EC2Identity] Checking write access before registering Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.5537 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6166 INFO [EC2Identity] EC2 registration was successful. Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6167 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6168 INFO [CredentialRefresher] credentialRefresher has started Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6169 INFO [CredentialRefresher] Starting credentials refresher loop Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6579 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 24 00:33:03.660277 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6582 INFO [CredentialRefresher] Credentials ready Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694267189Z" level=info msg="Start subscribing containerd event" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694348965Z" level=info msg="Start recovering state" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694456487Z" level=info msg="Start event monitor" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694476373Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694506200Z" level=info msg="Start streaming server" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694519625Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694529753Z" level=info msg="runtime interface starting up..." Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694540057Z" level=info msg="starting plugins..." Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694560169Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694648573Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694703615Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:33:03.694997 containerd[1991]: time="2025-11-24T00:33:03.694765174Z" level=info msg="containerd successfully booted in 0.449836s" Nov 24 00:33:03.694998 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 24 00:33:03.729011 amazon-ssm-agent[2043]: 2025-11-24 00:33:03.6601 INFO [CredentialRefresher] Next credential rotation will be in 29.999963654316666 minutes Nov 24 00:33:03.774025 sshd[2209]: Accepted publickey for core from 139.178.89.65 port 58744 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:03.780182 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:03.793177 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:33:03.796226 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:33:03.797884 tar[1976]: linux-amd64/README.md Nov 24 00:33:03.826760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:33:03.826910 systemd-logind[1963]: New session 1 of user core. Nov 24 00:33:03.838643 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:33:03.844143 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:33:03.884863 (systemd)[2231]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:33:03.892193 systemd-logind[1963]: New session c1 of user core. Nov 24 00:33:04.132662 systemd[2231]: Queued start job for default target default.target. Nov 24 00:33:04.138760 systemd[2231]: Created slice app.slice - User Application Slice. Nov 24 00:33:04.138808 systemd[2231]: Reached target paths.target - Paths. Nov 24 00:33:04.138896 systemd[2231]: Reached target timers.target - Timers. Nov 24 00:33:04.141581 systemd[2231]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:33:04.156836 systemd[2231]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:33:04.157006 systemd[2231]: Reached target sockets.target - Sockets. Nov 24 00:33:04.157079 systemd[2231]: Reached target basic.target - Basic System. Nov 24 00:33:04.157133 systemd[2231]: Reached target default.target - Main User Target. Nov 24 00:33:04.157174 systemd[2231]: Startup finished in 250ms. Nov 24 00:33:04.157335 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:33:04.165258 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:33:04.326610 systemd[1]: Started sshd@1-172.31.20.18:22-139.178.89.65:58754.service - OpenSSH per-connection server daemon (139.178.89.65:58754). Nov 24 00:33:04.540245 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 58754 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:04.543490 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:04.549648 systemd-logind[1963]: New session 2 of user core. Nov 24 00:33:04.558913 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:33:04.680810 amazon-ssm-agent[2043]: 2025-11-24 00:33:04.6803 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 24 00:33:04.689076 sshd[2245]: Connection closed by 139.178.89.65 port 58754 Nov 24 00:33:04.691131 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:04.700255 systemd[1]: sshd@1-172.31.20.18:22-139.178.89.65:58754.service: Deactivated successfully. Nov 24 00:33:04.700581 systemd-logind[1963]: Session 2 logged out. Waiting for processes to exit. Nov 24 00:33:04.703759 systemd[1]: session-2.scope: Deactivated successfully. 
Nov 24 00:33:04.707836 systemd-logind[1963]: Removed session 2. Nov 24 00:33:04.732830 systemd[1]: Started sshd@2-172.31.20.18:22-139.178.89.65:58770.service - OpenSSH per-connection server daemon (139.178.89.65:58770). Nov 24 00:33:04.781252 amazon-ssm-agent[2043]: 2025-11-24 00:33:04.6828 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2249) started Nov 24 00:33:04.881806 amazon-ssm-agent[2043]: 2025-11-24 00:33:04.6828 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 24 00:33:04.927070 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 58770 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:04.929936 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:04.940181 systemd-logind[1963]: New session 3 of user core. Nov 24 00:33:04.945127 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:33:05.070319 sshd[2267]: Connection closed by 139.178.89.65 port 58770 Nov 24 00:33:05.071020 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:05.088936 systemd[1]: sshd@2-172.31.20.18:22-139.178.89.65:58770.service: Deactivated successfully. Nov 24 00:33:05.091124 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:33:05.092068 systemd-logind[1963]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:33:05.093900 systemd-logind[1963]: Removed session 3. Nov 24 00:33:05.550641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:05.552121 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:33:05.555557 systemd[1]: Startup finished in 2.775s (kernel) + 7.369s (initrd) + 9.605s (userspace) = 19.749s. Nov 24 00:33:05.567379 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:33:06.689534 kubelet[2277]: E1124 00:33:06.689447 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:33:06.692222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:33:06.692502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:33:06.692834 systemd[1]: kubelet.service: Consumed 1.100s CPU time, 265.5M memory peak. Nov 24 00:33:11.808601 systemd-resolved[1851]: Clock change detected. Flushing caches. Nov 24 00:33:16.746549 systemd[1]: Started sshd@3-172.31.20.18:22-139.178.89.65:51434.service - OpenSSH per-connection server daemon (139.178.89.65:51434). Nov 24 00:33:16.946287 sshd[2289]: Accepted publickey for core from 139.178.89.65 port 51434 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:16.947872 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:16.953159 systemd-logind[1963]: New session 4 of user core. Nov 24 00:33:16.960696 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 24 00:33:17.082905 sshd[2292]: Connection closed by 139.178.89.65 port 51434 Nov 24 00:33:17.083910 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:17.090425 systemd[1]: sshd@3-172.31.20.18:22-139.178.89.65:51434.service: Deactivated successfully. Nov 24 00:33:17.092535 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:33:17.093709 systemd-logind[1963]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:33:17.095775 systemd-logind[1963]: Removed session 4. Nov 24 00:33:17.119778 systemd[1]: Started sshd@4-172.31.20.18:22-139.178.89.65:51440.service - OpenSSH per-connection server daemon (139.178.89.65:51440). Nov 24 00:33:17.311566 sshd[2298]: Accepted publickey for core from 139.178.89.65 port 51440 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:17.313159 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:17.322179 systemd-logind[1963]: New session 5 of user core. Nov 24 00:33:17.327722 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:33:17.442082 sshd[2301]: Connection closed by 139.178.89.65 port 51440 Nov 24 00:33:17.443326 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:17.448955 systemd-logind[1963]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:33:17.449805 systemd[1]: sshd@4-172.31.20.18:22-139.178.89.65:51440.service: Deactivated successfully. Nov 24 00:33:17.452145 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:33:17.454166 systemd-logind[1963]: Removed session 5. Nov 24 00:33:17.478766 systemd[1]: Started sshd@5-172.31.20.18:22-139.178.89.65:51448.service - OpenSSH per-connection server daemon (139.178.89.65:51448). Nov 24 00:33:17.659691 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 51448 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:17.661311 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:17.667473 systemd-logind[1963]: New session 6 of user core. Nov 24 00:33:17.673708 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:33:17.794028 sshd[2310]: Connection closed by 139.178.89.65 port 51448 Nov 24 00:33:17.794731 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:17.799970 systemd[1]: sshd@5-172.31.20.18:22-139.178.89.65:51448.service: Deactivated successfully. Nov 24 00:33:17.802658 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:33:17.803810 systemd-logind[1963]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:33:17.805590 systemd-logind[1963]: Removed session 6. Nov 24 00:33:17.830005 systemd[1]: Started sshd@6-172.31.20.18:22-139.178.89.65:51460.service - OpenSSH per-connection server daemon (139.178.89.65:51460). Nov 24 00:33:18.002975 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 51460 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:18.004482 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:18.011997 systemd-logind[1963]: New session 7 of user core. Nov 24 00:33:18.017756 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 24 00:33:18.133841 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:33:18.134112 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:33:18.149216 sudo[2320]: pam_unix(sudo:session): session closed for user root Nov 24 00:33:18.172399 sshd[2319]: Connection closed by 139.178.89.65 port 51460 Nov 24 00:33:18.173500 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:18.178326 systemd[1]: sshd@6-172.31.20.18:22-139.178.89.65:51460.service: Deactivated successfully. Nov 24 00:33:18.178330 systemd-logind[1963]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:33:18.180235 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:33:18.182301 systemd-logind[1963]: Removed session 7. Nov 24 00:33:18.207985 systemd[1]: Started sshd@7-172.31.20.18:22-139.178.89.65:51468.service - OpenSSH per-connection server daemon (139.178.89.65:51468). Nov 24 00:33:18.357990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:33:18.360671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:33:18.391748 sshd[2326]: Accepted publickey for core from 139.178.89.65 port 51468 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:18.393775 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:18.402914 systemd-logind[1963]: New session 8 of user core. Nov 24 00:33:18.408036 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:33:18.507471 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:33:18.507747 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:33:18.519695 sudo[2334]: pam_unix(sudo:session): session closed for user root Nov 24 00:33:18.527686 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:33:18.528060 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:33:18.543565 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:33:18.600597 augenrules[2360]: No rules Nov 24 00:33:18.603224 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:33:18.603808 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:33:18.605023 sudo[2333]: pam_unix(sudo:session): session closed for user root Nov 24 00:33:18.607003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:18.626124 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:33:18.629481 sshd[2332]: Connection closed by 139.178.89.65 port 51468 Nov 24 00:33:18.629992 sshd-session[2326]: pam_unix(sshd:session): session closed for user core Nov 24 00:33:18.636017 systemd[1]: sshd@7-172.31.20.18:22-139.178.89.65:51468.service: Deactivated successfully. Nov 24 00:33:18.639162 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:33:18.643390 systemd-logind[1963]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:33:18.646096 systemd-logind[1963]: Removed session 8. 
Nov 24 00:33:18.663482 systemd[1]: Started sshd@8-172.31.20.18:22-139.178.89.65:51484.service - OpenSSH per-connection server daemon (139.178.89.65:51484). Nov 24 00:33:18.693782 kubelet[2365]: E1124 00:33:18.693719 2365 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:33:18.699179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:33:18.700017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:33:18.700780 systemd[1]: kubelet.service: Consumed 207ms CPU time, 109M memory peak. Nov 24 00:33:18.850131 sshd[2376]: Accepted publickey for core from 139.178.89.65 port 51484 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:33:18.851649 sshd-session[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:33:18.856906 systemd-logind[1963]: New session 9 of user core. Nov 24 00:33:18.863777 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:33:18.968630 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:33:18.969021 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:33:19.506216 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:33:19.536077 (dockerd)[2401]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:33:19.866125 dockerd[2401]: time="2025-11-24T00:33:19.865990068Z" level=info msg="Starting up" Nov 24 00:33:19.870361 dockerd[2401]: time="2025-11-24T00:33:19.870320903Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:33:19.883047 dockerd[2401]: time="2025-11-24T00:33:19.882995907Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:33:19.956731 dockerd[2401]: time="2025-11-24T00:33:19.956430749Z" level=info msg="Loading containers: start." Nov 24 00:33:19.972544 kernel: Initializing XFRM netlink socket Nov 24 00:33:20.330690 (udev-worker)[2421]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:33:20.384442 systemd-networkd[1850]: docker0: Link UP Nov 24 00:33:20.400224 dockerd[2401]: time="2025-11-24T00:33:20.400166083Z" level=info msg="Loading containers: done." 
Nov 24 00:33:20.427473 dockerd[2401]: time="2025-11-24T00:33:20.427395363Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:33:20.427690 dockerd[2401]: time="2025-11-24T00:33:20.427539044Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:33:20.427690 dockerd[2401]: time="2025-11-24T00:33:20.427655969Z" level=info msg="Initializing buildkit" Nov 24 00:33:20.470763 dockerd[2401]: time="2025-11-24T00:33:20.470631700Z" level=info msg="Completed buildkit initialization" Nov 24 00:33:20.478076 dockerd[2401]: time="2025-11-24T00:33:20.477976545Z" level=info msg="Daemon has completed initialization" Nov 24 00:33:20.478642 dockerd[2401]: time="2025-11-24T00:33:20.478178766Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:33:20.478263 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:33:21.913683 containerd[1991]: time="2025-11-24T00:33:21.913636859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 24 00:33:22.566128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102051430.mount: Deactivated successfully. Nov 24 00:33:24.177434 containerd[1991]: time="2025-11-24T00:33:24.177372198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:24.178933 containerd[1991]: time="2025-11-24T00:33:24.178890291Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Nov 24 00:33:24.181078 containerd[1991]: time="2025-11-24T00:33:24.180548883Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:24.184938 containerd[1991]: time="2025-11-24T00:33:24.184881505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:24.186224 containerd[1991]: time="2025-11-24T00:33:24.186181261Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.272495588s" Nov 24 00:33:24.186395 containerd[1991]: time="2025-11-24T00:33:24.186371484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Nov 24 00:33:24.187467 containerd[1991]: time="2025-11-24T00:33:24.187279655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 24 00:33:25.985889 containerd[1991]: time="2025-11-24T00:33:25.985823888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:25.987142 containerd[1991]: time="2025-11-24T00:33:25.987098143Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Nov 24 00:33:25.989959 containerd[1991]: time="2025-11-24T00:33:25.989915799Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:25.993882 containerd[1991]: time="2025-11-24T00:33:25.992847369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:25.993882 containerd[1991]: time="2025-11-24T00:33:25.993712089Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.806394378s" Nov 24 00:33:25.993882 containerd[1991]: time="2025-11-24T00:33:25.993753282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Nov 24 00:33:25.996834 containerd[1991]: time="2025-11-24T00:33:25.996786331Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 24 00:33:27.457670 containerd[1991]: time="2025-11-24T00:33:27.457614027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:27.460033 containerd[1991]: time="2025-11-24T00:33:27.459796840Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248" Nov 24 00:33:27.465534 containerd[1991]: time="2025-11-24T00:33:27.465487875Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:27.470246 containerd[1991]: time="2025-11-24T00:33:27.470192346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:27.472485 containerd[1991]: time="2025-11-24T00:33:27.471592097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.474760236s" Nov 24 00:33:27.472485 containerd[1991]: time="2025-11-24T00:33:27.471642005Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Nov 24 00:33:27.472844 containerd[1991]: time="2025-11-24T00:33:27.472811821Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 24 00:33:28.572683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount640349661.mount: Deactivated successfully. 
Nov 24 00:33:28.726700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 00:33:28.729635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:33:28.996151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:29.012090 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:33:29.098684 kubelet[2693]: E1124 00:33:29.098634 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:33:29.101939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:33:29.102410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:33:29.103560 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.6M memory peak. Nov 24 00:33:29.331821 containerd[1991]: time="2025-11-24T00:33:29.331521672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:29.333751 containerd[1991]: time="2025-11-24T00:33:29.333707647Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Nov 24 00:33:29.336395 containerd[1991]: time="2025-11-24T00:33:29.335972843Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:29.339670 containerd[1991]: time="2025-11-24T00:33:29.339622596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:29.340652 containerd[1991]: time="2025-11-24T00:33:29.340609114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.867756468s" Nov 24 00:33:29.340652 containerd[1991]: time="2025-11-24T00:33:29.340656661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Nov 24 00:33:29.341616 containerd[1991]: time="2025-11-24T00:33:29.341586726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 24 00:33:29.968104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282739734.mount: Deactivated successfully. 
Nov 24 00:33:31.249910 containerd[1991]: time="2025-11-24T00:33:31.249837037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:31.252463 containerd[1991]: time="2025-11-24T00:33:31.252313973Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 24 00:33:31.255117 containerd[1991]: time="2025-11-24T00:33:31.254570854Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:31.259172 containerd[1991]: time="2025-11-24T00:33:31.259123701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:31.260534 containerd[1991]: time="2025-11-24T00:33:31.260492226Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.918760038s" Nov 24 00:33:31.260701 containerd[1991]: time="2025-11-24T00:33:31.260681012Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 24 00:33:31.261641 containerd[1991]: time="2025-11-24T00:33:31.261605681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:33:31.720977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251487907.mount: Deactivated successfully. 
Nov 24 00:33:31.732805 containerd[1991]: time="2025-11-24T00:33:31.732739774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:33:31.734913 containerd[1991]: time="2025-11-24T00:33:31.734867965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:33:31.738034 containerd[1991]: time="2025-11-24T00:33:31.736961074Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:33:31.740552 containerd[1991]: time="2025-11-24T00:33:31.740491302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:33:31.741830 containerd[1991]: time="2025-11-24T00:33:31.741779380Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 480.134501ms" Nov 24 00:33:31.741830 containerd[1991]: time="2025-11-24T00:33:31.741817863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:33:31.742840 containerd[1991]: time="2025-11-24T00:33:31.742792841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 24 00:33:32.291619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39626936.mount: Deactivated successfully. 
Nov 24 00:33:34.876232 containerd[1991]: time="2025-11-24T00:33:34.876167240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:34.877129 containerd[1991]: time="2025-11-24T00:33:34.877082076Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 24 00:33:34.878050 containerd[1991]: time="2025-11-24T00:33:34.877998321Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:34.881473 containerd[1991]: time="2025-11-24T00:33:34.881360284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:34.882181 containerd[1991]: time="2025-11-24T00:33:34.882049500Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.139210232s" Nov 24 00:33:34.882181 containerd[1991]: time="2025-11-24T00:33:34.882079282Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 24 00:33:34.984980 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 24 00:33:37.338463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:37.338728 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.6M memory peak. Nov 24 00:33:37.341514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:33:37.377193 systemd[1]: Reload requested from client PID 2841 ('systemctl') (unit session-9.scope)... Nov 24 00:33:37.377215 systemd[1]: Reloading... Nov 24 00:33:37.526483 zram_generator::config[2885]: No configuration found. Nov 24 00:33:37.817116 systemd[1]: Reloading finished in 439 ms. Nov 24 00:33:37.872224 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:33:37.872335 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:33:37.872728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:37.872828 systemd[1]: kubelet.service: Consumed 145ms CPU time, 97.6M memory peak. Nov 24 00:33:37.875048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:33:38.150298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:38.162006 (kubelet)[2948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:33:38.311474 kubelet[2948]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:33:38.311474 kubelet[2948]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 24 00:33:38.311474 kubelet[2948]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:33:38.311474 kubelet[2948]: I1124 00:33:38.310945 2948 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:33:38.854035 kubelet[2948]: I1124 00:33:38.853981 2948 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 00:33:38.854035 kubelet[2948]: I1124 00:33:38.854016 2948 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:33:38.854325 kubelet[2948]: I1124 00:33:38.854301 2948 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 00:33:38.893475 kubelet[2948]: E1124 00:33:38.892093 2948 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:38.893981 kubelet[2948]: I1124 00:33:38.893918 2948 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:33:38.923103 kubelet[2948]: I1124 00:33:38.923062 2948 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:33:38.929728 kubelet[2948]: I1124 00:33:38.929694 2948 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:33:38.936221 kubelet[2948]: I1124 00:33:38.936151 2948 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:33:38.936440 kubelet[2948]: I1124 00:33:38.936220 2948 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:33:38.938560 kubelet[2948]: I1124 00:33:38.938519 2948 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:33:38.938560 kubelet[2948]: I1124 00:33:38.938552 2948 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 00:33:38.940190 kubelet[2948]: I1124 00:33:38.940141 2948 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:33:38.945375 kubelet[2948]: I1124 00:33:38.945339 2948 kubelet.go:446] "Attempting to sync node with API server" Nov 24 00:33:38.945375 kubelet[2948]: I1124 00:33:38.945382 2948 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:33:38.947542 kubelet[2948]: I1124 00:33:38.947270 2948 kubelet.go:352] "Adding apiserver pod source" Nov 24 00:33:38.947542 kubelet[2948]: I1124 00:33:38.947302 2948 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:33:38.953980 kubelet[2948]: W1124 00:33:38.953926 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-18&limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:38.954103 kubelet[2948]: E1124 00:33:38.953989 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-18&limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:38.954401 kubelet[2948]: W1124 
00:33:38.954358 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:38.954464 kubelet[2948]: E1124 00:33:38.954409 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:38.954841 kubelet[2948]: I1124 00:33:38.954805 2948 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:33:38.963409 kubelet[2948]: I1124 00:33:38.963266 2948 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 00:33:38.965189 kubelet[2948]: W1124 00:33:38.964252 2948 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 00:33:38.965471 kubelet[2948]: I1124 00:33:38.965410 2948 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:33:38.965471 kubelet[2948]: I1124 00:33:38.965465 2948 server.go:1287] "Started kubelet" Nov 24 00:33:38.970229 kubelet[2948]: I1124 00:33:38.969728 2948 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:33:38.971116 kubelet[2948]: I1124 00:33:38.970687 2948 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:33:38.971116 kubelet[2948]: I1124 00:33:38.971056 2948 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:33:38.973015 kubelet[2948]: I1124 00:33:38.972978 2948 server.go:479] "Adding debug handlers to kubelet server" Nov 24 00:33:38.979651 kubelet[2948]: E1124 00:33:38.972565 2948 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.18:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-18.187aca1a6ff78809 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-18,UID:ip-172-31-20-18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-18,},FirstTimestamp:2025-11-24 00:33:38.965424137 +0000 UTC m=+0.799221408,LastTimestamp:2025-11-24 00:33:38.965424137 +0000 UTC m=+0.799221408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-18,}" Nov 24 00:33:38.982668 kubelet[2948]: I1124 00:33:38.982501 2948 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:33:38.984342 kubelet[2948]: I1124 00:33:38.984009 2948 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:33:38.993786 kubelet[2948]: I1124 00:33:38.992020 2948 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:33:38.993786 kubelet[2948]: E1124 00:33:38.992274 2948 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-20-18\" not found" Nov 24 00:33:38.993786 kubelet[2948]: I1124 00:33:38.992676 2948 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:33:38.993786 kubelet[2948]: I1124 00:33:38.992754 2948 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:33:38.993786 kubelet[2948]: W1124 00:33:38.993113 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:38.993786 kubelet[2948]: E1124 00:33:38.993167 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:38.993786 kubelet[2948]: E1124 00:33:38.993239 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-18?timeout=10s\": dial tcp 172.31.20.18:6443: connect: connection refused" interval="200ms" Nov 24 00:33:38.995000 kubelet[2948]: I1124 00:33:38.994978 2948 factory.go:221] Registration of the systemd container factory successfully Nov 24 00:33:38.995814 kubelet[2948]: E1124 00:33:38.995420 2948 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:33:38.996165 kubelet[2948]: I1124 00:33:38.996069 2948 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:33:38.998998 kubelet[2948]: I1124 00:33:38.998968 2948 factory.go:221] Registration of the containerd container factory successfully Nov 24 00:33:39.030189 kubelet[2948]: I1124 00:33:39.030167 2948 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:33:39.030619 kubelet[2948]: I1124 00:33:39.030330 2948 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:33:39.030619 kubelet[2948]: I1124 00:33:39.030353 2948 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:33:39.033475 kubelet[2948]: I1124 00:33:39.033305 2948 policy_none.go:49] "None policy: Start" Nov 24 00:33:39.033475 kubelet[2948]: I1124 00:33:39.033339 2948 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:33:39.033475 kubelet[2948]: I1124 00:33:39.033356 2948 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:33:39.038327 kubelet[2948]: I1124 00:33:39.038258 2948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 00:33:39.040495 kubelet[2948]: I1124 00:33:39.040438 2948 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 00:33:39.040974 kubelet[2948]: I1124 00:33:39.040626 2948 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 00:33:39.040974 kubelet[2948]: I1124 00:33:39.040662 2948 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:33:39.040974 kubelet[2948]: I1124 00:33:39.040672 2948 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 00:33:39.040974 kubelet[2948]: E1124 00:33:39.040728 2948 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:33:39.047087 kubelet[2948]: W1124 00:33:39.047049 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:39.047266 kubelet[2948]: E1124 00:33:39.047096 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:39.049736 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:33:39.064766 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:33:39.070769 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:33:39.082253 kubelet[2948]: I1124 00:33:39.081400 2948 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 00:33:39.082253 kubelet[2948]: I1124 00:33:39.081663 2948 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:33:39.082253 kubelet[2948]: I1124 00:33:39.081685 2948 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:33:39.082474 kubelet[2948]: I1124 00:33:39.082333 2948 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:33:39.085221 kubelet[2948]: E1124 00:33:39.085173 2948 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:33:39.085357 kubelet[2948]: E1124 00:33:39.085246 2948 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-18\" not found" Nov 24 00:33:39.152131 systemd[1]: Created slice kubepods-burstable-pod2c3ae435507f18c463f1aa76dfd308a0.slice - libcontainer container kubepods-burstable-pod2c3ae435507f18c463f1aa76dfd308a0.slice. Nov 24 00:33:39.161276 kubelet[2948]: E1124 00:33:39.161157 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:39.168229 systemd[1]: Created slice kubepods-burstable-podab2043b6fc35922bfda40cc9976e04fc.slice - libcontainer container kubepods-burstable-podab2043b6fc35922bfda40cc9976e04fc.slice. Nov 24 00:33:39.171464 kubelet[2948]: E1124 00:33:39.171411 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:39.174001 systemd[1]: Created slice kubepods-burstable-pod0d4480a45ff1d7af19f7e2b900e73450.slice - libcontainer container kubepods-burstable-pod0d4480a45ff1d7af19f7e2b900e73450.slice. 
Nov 24 00:33:39.176284 kubelet[2948]: E1124 00:33:39.176253 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:39.185859 kubelet[2948]: I1124 00:33:39.185825 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-18" Nov 24 00:33:39.186402 kubelet[2948]: E1124 00:33:39.186366 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.18:6443/api/v1/nodes\": dial tcp 172.31.20.18:6443: connect: connection refused" node="ip-172-31-20-18" Nov 24 00:33:39.194047 kubelet[2948]: I1124 00:33:39.193819 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:39.194047 kubelet[2948]: I1124 00:33:39.193856 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:39.194047 kubelet[2948]: I1124 00:33:39.193878 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c3ae435507f18c463f1aa76dfd308a0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-18\" (UID: \"2c3ae435507f18c463f1aa76dfd308a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:39.194047 kubelet[2948]: I1124 00:33:39.193894 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c3ae435507f18c463f1aa76dfd308a0-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-18\" (UID: \"2c3ae435507f18c463f1aa76dfd308a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:39.194047 kubelet[2948]: I1124 00:33:39.193911 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:39.194285 kubelet[2948]: I1124 00:33:39.193925 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:39.194285 kubelet[2948]: I1124 00:33:39.193939 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:39.194285 kubelet[2948]: I1124 00:33:39.193954 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d4480a45ff1d7af19f7e2b900e73450-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-18\" (UID: \"0d4480a45ff1d7af19f7e2b900e73450\") " pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:39.194285 kubelet[2948]: I1124 00:33:39.193971 2948 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c3ae435507f18c463f1aa76dfd308a0-ca-certs\") pod \"kube-apiserver-ip-172-31-20-18\" (UID: \"2c3ae435507f18c463f1aa76dfd308a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:39.194285 kubelet[2948]: E1124 00:33:39.194005 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-18?timeout=10s\": dial tcp 172.31.20.18:6443: connect: connection refused" interval="400ms" Nov 24 00:33:39.389190 kubelet[2948]: I1124 00:33:39.389092 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-18" Nov 24 00:33:39.389661 kubelet[2948]: E1124 00:33:39.389606 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.18:6443/api/v1/nodes\": dial tcp 172.31.20.18:6443: connect: connection refused" node="ip-172-31-20-18" Nov 24 00:33:39.464615 containerd[1991]: time="2025-11-24T00:33:39.464477178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-18,Uid:2c3ae435507f18c463f1aa76dfd308a0,Namespace:kube-system,Attempt:0,}" Nov 24 00:33:39.481487 containerd[1991]: time="2025-11-24T00:33:39.481393251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-18,Uid:ab2043b6fc35922bfda40cc9976e04fc,Namespace:kube-system,Attempt:0,}" Nov 24 00:33:39.481893 containerd[1991]: time="2025-11-24T00:33:39.481734689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-18,Uid:0d4480a45ff1d7af19f7e2b900e73450,Namespace:kube-system,Attempt:0,}" Nov 24 00:33:39.601792 kubelet[2948]: E1124 00:33:39.600267 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-18?timeout=10s\": dial tcp 172.31.20.18:6443: connect: connection refused" interval="800ms" Nov 24 00:33:39.661841 containerd[1991]: time="2025-11-24T00:33:39.659607309Z" level=info msg="connecting to shim 5cfc4743840d25a16549e4d6671c5541c2766c337b18c9f12cd8feae1ba8c0a4" address="unix:///run/containerd/s/66e81965a870d5656e32714aa49f0838d3d3ba805da746b4ac2964c3ed690423" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:33:39.662668 containerd[1991]: time="2025-11-24T00:33:39.662615383Z" level=info msg="connecting to shim b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed" address="unix:///run/containerd/s/41d8308cfbe50d144b4a72bbb83e6ffa9a3b100aff59b339a31df4c5c6d72be1" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:33:39.669702 containerd[1991]: time="2025-11-24T00:33:39.669645065Z" level=info msg="connecting to shim 370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c" 
address="unix:///run/containerd/s/ecd351aa0502cd1df828bee234b23ca97ede858165f861794db925031c46c057" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:33:39.794510 kubelet[2948]: I1124 00:33:39.794113 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-18" Nov 24 00:33:39.795591 kubelet[2948]: E1124 00:33:39.795493 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.18:6443/api/v1/nodes\": dial tcp 172.31.20.18:6443: connect: connection refused" node="ip-172-31-20-18" Nov 24 00:33:39.807109 systemd[1]: Started cri-containerd-370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c.scope - libcontainer container 370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c. Nov 24 00:33:39.822991 systemd[1]: Started cri-containerd-5cfc4743840d25a16549e4d6671c5541c2766c337b18c9f12cd8feae1ba8c0a4.scope - libcontainer container 5cfc4743840d25a16549e4d6671c5541c2766c337b18c9f12cd8feae1ba8c0a4. Nov 24 00:33:39.826186 systemd[1]: Started cri-containerd-b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed.scope - libcontainer container b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed. Nov 24 00:33:39.915314 kubelet[2948]: W1124 00:33:39.915170 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-18&limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:39.915659 kubelet[2948]: E1124 00:33:39.915284 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-18&limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:39.953289 containerd[1991]: time="2025-11-24T00:33:39.953218927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-18,Uid:ab2043b6fc35922bfda40cc9976e04fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed\"" Nov 24 00:33:39.953907 containerd[1991]: time="2025-11-24T00:33:39.953720957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-18,Uid:2c3ae435507f18c463f1aa76dfd308a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cfc4743840d25a16549e4d6671c5541c2766c337b18c9f12cd8feae1ba8c0a4\"" Nov 24 00:33:39.955431 containerd[1991]: time="2025-11-24T00:33:39.955300563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-18,Uid:0d4480a45ff1d7af19f7e2b900e73450,Namespace:kube-system,Attempt:0,} returns sandbox id \"370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c\"" Nov 24 00:33:39.959325 containerd[1991]: time="2025-11-24T00:33:39.959260148Z" level=info msg="CreateContainer within sandbox \"b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:33:39.961487 containerd[1991]: time="2025-11-24T00:33:39.960924336Z" level=info msg="CreateContainer within sandbox \"370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:33:39.962599 containerd[1991]: time="2025-11-24T00:33:39.962562889Z" 
level=info msg="CreateContainer within sandbox \"5cfc4743840d25a16549e4d6671c5541c2766c337b18c9f12cd8feae1ba8c0a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:33:39.983735 containerd[1991]: time="2025-11-24T00:33:39.983208539Z" level=info msg="Container 7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:33:39.989216 containerd[1991]: time="2025-11-24T00:33:39.989173024Z" level=info msg="Container b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:33:39.996664 containerd[1991]: time="2025-11-24T00:33:39.996617985Z" level=info msg="Container e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:33:40.008673 containerd[1991]: time="2025-11-24T00:33:40.008619371Z" level=info msg="CreateContainer within sandbox \"370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260\"" Nov 24 00:33:40.009418 containerd[1991]: time="2025-11-24T00:33:40.009376772Z" level=info msg="StartContainer for \"7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260\"" Nov 24 00:33:40.012796 containerd[1991]: time="2025-11-24T00:33:40.012754321Z" level=info msg="connecting to shim 7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260" address="unix:///run/containerd/s/ecd351aa0502cd1df828bee234b23ca97ede858165f861794db925031c46c057" protocol=ttrpc version=3 Nov 24 00:33:40.022805 containerd[1991]: time="2025-11-24T00:33:40.022754553Z" level=info msg="CreateContainer within sandbox \"b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf\"" Nov 24 00:33:40.024744 containerd[1991]: time="2025-11-24T00:33:40.024709514Z" level=info msg="StartContainer for \"b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf\"" Nov 24 00:33:40.026670 containerd[1991]: time="2025-11-24T00:33:40.026631295Z" level=info msg="connecting to shim b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf" address="unix:///run/containerd/s/41d8308cfbe50d144b4a72bbb83e6ffa9a3b100aff59b339a31df4c5c6d72be1" protocol=ttrpc version=3 Nov 24 00:33:40.029039 kubelet[2948]: W1124 00:33:40.028965 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:40.029039 kubelet[2948]: E1124 00:33:40.029020 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:40.034573 kubelet[2948]: W1124 00:33:40.034412 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:40.034573 kubelet[2948]: E1124 
00:33:40.034530 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:40.035699 containerd[1991]: time="2025-11-24T00:33:40.035590955Z" level=info msg="CreateContainer within sandbox \"5cfc4743840d25a16549e4d6671c5541c2766c337b18c9f12cd8feae1ba8c0a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab\"" Nov 24 00:33:40.048023 containerd[1991]: time="2025-11-24T00:33:40.047861893Z" level=info msg="StartContainer for \"e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab\"" Nov 24 00:33:40.049653 systemd[1]: Started cri-containerd-7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260.scope - libcontainer container 7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260. Nov 24 00:33:40.066703 systemd[1]: Started cri-containerd-b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf.scope - libcontainer container b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf. Nov 24 00:33:40.071279 containerd[1991]: time="2025-11-24T00:33:40.071181760Z" level=info msg="connecting to shim e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab" address="unix:///run/containerd/s/66e81965a870d5656e32714aa49f0838d3d3ba805da746b4ac2964c3ed690423" protocol=ttrpc version=3 Nov 24 00:33:40.138778 systemd[1]: Started cri-containerd-e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab.scope - libcontainer container e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab. 
Nov 24 00:33:40.159938 containerd[1991]: time="2025-11-24T00:33:40.159280464Z" level=info msg="StartContainer for \"7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260\" returns successfully" Nov 24 00:33:40.206820 containerd[1991]: time="2025-11-24T00:33:40.206776195Z" level=info msg="StartContainer for \"b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf\" returns successfully" Nov 24 00:33:40.231410 kubelet[2948]: W1124 00:33:40.231335 2948 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.18:6443: connect: connection refused Nov 24 00:33:40.231585 kubelet[2948]: E1124 00:33:40.231425 2948 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.18:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:33:40.264302 containerd[1991]: time="2025-11-24T00:33:40.264166836Z" level=info msg="StartContainer for \"e058e59f475bd8adc59bf042be5fa0c2ce512c1f37b2b8ef492f22a12f2a31ab\" returns successfully" Nov 24 00:33:40.401546 kubelet[2948]: E1124 00:33:40.401406 2948 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-18?timeout=10s\": dial tcp 172.31.20.18:6443: connect: connection refused" interval="1.6s" Nov 24 00:33:40.601580 kubelet[2948]: I1124 00:33:40.601052 2948 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-18" Nov 24 00:33:40.602732 kubelet[2948]: E1124 00:33:40.602692 2948 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.18:6443/api/v1/nodes\": dial tcp 172.31.20.18:6443: connect: connection refused" node="ip-172-31-20-18" Nov 24 00:33:41.124239 kubelet[2948]: E1124 00:33:41.123523 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:41.138750 kubelet[2948]: E1124 00:33:41.138713 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:41.139431 kubelet[2948]: E1124 00:33:41.139410 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:42.134305 kubelet[2948]: E1124 00:33:42.134269 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:42.134816 kubelet[2948]: E1124 00:33:42.134747 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:42.135380 kubelet[2948]: E1124 00:33:42.135355 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:42.205282 kubelet[2948]: I1124 00:33:42.205244 2948 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-20-18" Nov 24 00:33:43.137245 kubelet[2948]: E1124 00:33:43.137210 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:43.138496 kubelet[2948]: E1124 00:33:43.138434 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:43.138797 kubelet[2948]: E1124 00:33:43.138778 2948 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-18\" not found" node="ip-172-31-20-18" Nov 24 00:33:43.273824 kubelet[2948]: I1124 00:33:43.273781 2948 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-18" Nov 24 00:33:43.273975 kubelet[2948]: E1124 00:33:43.273835 2948 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-18\": node \"ip-172-31-20-18\" not found" Nov 24 00:33:43.293602 kubelet[2948]: I1124 00:33:43.292841 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:43.343982 kubelet[2948]: E1124 00:33:43.343939 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:43.343982 kubelet[2948]: I1124 00:33:43.343983 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:43.350733 kubelet[2948]: E1124 00:33:43.350693 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:43.350733 kubelet[2948]: I1124 00:33:43.350733 2948 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:43.355842 kubelet[2948]: E1124 00:33:43.355790 2948 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-18\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:43.958240 kubelet[2948]: I1124 00:33:43.957907 2948 apiserver.go:52] "Watching apiserver" Nov 24 00:33:43.993787 kubelet[2948]: I1124 00:33:43.993742 2948 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:33:45.633244 systemd[1]: Reload requested from client PID 3224 ('systemctl') (unit session-9.scope)... Nov 24 00:33:45.633268 systemd[1]: Reloading... Nov 24 00:33:45.774483 zram_generator::config[3274]: No configuration found. Nov 24 00:33:46.133799 systemd[1]: Reloading finished in 499 ms. Nov 24 00:33:46.166629 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:33:46.189188 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:33:46.189473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:46.189537 systemd[1]: kubelet.service: Consumed 1.157s CPU time, 128M memory peak. Nov 24 00:33:46.191587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 24 00:33:46.466853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:33:46.472432 (kubelet)[3328]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:33:46.550244 kubelet[3328]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:33:46.550244 kubelet[3328]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:33:46.550244 kubelet[3328]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:33:46.552153 kubelet[3328]: I1124 00:33:46.550381 3328 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:33:46.566078 kubelet[3328]: I1124 00:33:46.565984 3328 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 00:33:46.566078 kubelet[3328]: I1124 00:33:46.566033 3328 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:33:46.567215 kubelet[3328]: I1124 00:33:46.567163 3328 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 00:33:46.572870 kubelet[3328]: I1124 00:33:46.572784 3328 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 00:33:46.586421 kubelet[3328]: I1124 00:33:46.586368 3328 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:33:46.591248 kubelet[3328]: I1124 00:33:46.591217 3328 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:33:46.594754 kubelet[3328]: I1124 00:33:46.594725 3328 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:33:46.595025 kubelet[3328]: I1124 00:33:46.594979 3328 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:33:46.595235 kubelet[3328]: I1124 00:33:46.595017 3328 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:33:46.595363 kubelet[3328]: I1124 00:33:46.595304 3328 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:33:46.595363 kubelet[3328]: I1124 00:33:46.595324 3328 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 00:33:46.595474 kubelet[3328]: I1124 00:33:46.595395 3328 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:33:46.597170 kubelet[3328]: I1124 00:33:46.595617 3328 kubelet.go:446] "Attempting to sync node with API server" Nov 24 00:33:46.597170 kubelet[3328]: I1124 00:33:46.595686 3328 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:33:46.597170 kubelet[3328]: I1124 00:33:46.595737 3328 kubelet.go:352] "Adding apiserver pod source" Nov 24 00:33:46.597170 kubelet[3328]: I1124 00:33:46.595752 3328 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:33:46.599962 kubelet[3328]: I1124 00:33:46.599663 3328 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:33:46.601464 kubelet[3328]: I1124 00:33:46.600423 3328 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 00:33:46.601911 kubelet[3328]: I1124 00:33:46.601890 3328 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:33:46.601994 kubelet[3328]: I1124 00:33:46.601938 3328 server.go:1287] "Started kubelet" Nov 24 00:33:46.608828 kubelet[3328]: I1124 00:33:46.608774 3328 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:33:46.610866 kubelet[3328]: I1124 00:33:46.610838 3328 server.go:479] "Adding 
debug handlers to kubelet server" Nov 24 00:33:46.618038 kubelet[3328]: I1124 00:33:46.617961 3328 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:33:46.618523 kubelet[3328]: I1124 00:33:46.618505 3328 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:33:46.619909 kubelet[3328]: I1124 00:33:46.619890 3328 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:33:46.628266 kubelet[3328]: I1124 00:33:46.628227 3328 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:33:46.660311 kubelet[3328]: I1124 00:33:46.660139 3328 factory.go:221] Registration of the systemd container factory successfully Nov 24 00:33:46.660614 kubelet[3328]: I1124 00:33:46.660586 3328 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:33:46.662000 kubelet[3328]: I1124 00:33:46.629877 3328 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:33:46.662737 kubelet[3328]: I1124 00:33:46.629854 3328 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:33:46.662737 kubelet[3328]: E1124 00:33:46.630095 3328 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-18\" not found" Nov 24 00:33:46.662737 kubelet[3328]: I1124 00:33:46.662570 3328 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:33:46.664015 kubelet[3328]: I1124 00:33:46.663972 3328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 00:33:46.669044 kubelet[3328]: I1124 00:33:46.668301 3328 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 00:33:46.669044 kubelet[3328]: I1124 00:33:46.668363 3328 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 00:33:46.669044 kubelet[3328]: I1124 00:33:46.668391 3328 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:33:46.669044 kubelet[3328]: I1124 00:33:46.668401 3328 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 00:33:46.669044 kubelet[3328]: E1124 00:33:46.668485 3328 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:33:46.693752 kubelet[3328]: I1124 00:33:46.693511 3328 factory.go:221] Registration of the containerd container factory successfully Nov 24 00:33:46.698082 kubelet[3328]: E1124 00:33:46.697616 3328 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:33:46.769289 kubelet[3328]: E1124 00:33:46.769256 3328 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.771799 3328 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.771833 3328 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.771956 3328 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.772661 3328 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.772679 3328 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.772707 3328 policy_none.go:49] "None policy: Start" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.772815 3328 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:33:46.773150 kubelet[3328]: I1124 00:33:46.772831 3328 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:33:46.773564 kubelet[3328]: I1124 00:33:46.773178 3328 state_mem.go:75] "Updated machine memory state" Nov 24 00:33:46.780104 kubelet[3328]: I1124 00:33:46.779808 3328 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 00:33:46.780314 kubelet[3328]: I1124 00:33:46.780287 3328 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:33:46.780397 kubelet[3328]: I1124 00:33:46.780306 3328 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:33:46.782759 kubelet[3328]: I1124 00:33:46.781826 3328 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:33:46.792733 kubelet[3328]: E1124 00:33:46.792573 3328 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:33:46.913691 kubelet[3328]: I1124 00:33:46.913221 3328 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-18" Nov 24 00:33:46.932684 kubelet[3328]: I1124 00:33:46.932617 3328 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-18" Nov 24 00:33:46.932832 kubelet[3328]: I1124 00:33:46.932709 3328 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-18" Nov 24 00:33:46.972844 kubelet[3328]: I1124 00:33:46.972560 3328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:46.974018 kubelet[3328]: I1124 00:33:46.973637 3328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:46.974018 kubelet[3328]: I1124 00:33:46.973934 3328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:47.070047 kubelet[3328]: I1124 00:33:47.068944 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:47.070047 kubelet[3328]: I1124 00:33:47.069001 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:47.070047 kubelet[3328]: I1124 00:33:47.069023 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:47.070047 kubelet[3328]: I1124 00:33:47.069166 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:47.070047 kubelet[3328]: I1124 00:33:47.069191 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d4480a45ff1d7af19f7e2b900e73450-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-18\" (UID: \"0d4480a45ff1d7af19f7e2b900e73450\") " pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:47.070369 kubelet[3328]: I1124 00:33:47.069209 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c3ae435507f18c463f1aa76dfd308a0-ca-certs\") pod \"kube-apiserver-ip-172-31-20-18\" (UID: \"2c3ae435507f18c463f1aa76dfd308a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:47.070369 kubelet[3328]: I1124 00:33:47.069230 3328 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c3ae435507f18c463f1aa76dfd308a0-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-18\" (UID: \"2c3ae435507f18c463f1aa76dfd308a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:47.079179 kubelet[3328]: I1124 00:33:47.069405 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c3ae435507f18c463f1aa76dfd308a0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-18\" (UID: \"2c3ae435507f18c463f1aa76dfd308a0\") " pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:47.079744 kubelet[3328]: I1124 00:33:47.079387 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab2043b6fc35922bfda40cc9976e04fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-18\" (UID: \"ab2043b6fc35922bfda40cc9976e04fc\") " pod="kube-system/kube-controller-manager-ip-172-31-20-18" Nov 24 00:33:47.596877 kubelet[3328]: I1124 00:33:47.596834 3328 apiserver.go:52] "Watching apiserver" Nov 24 00:33:47.663328 kubelet[3328]: I1124 00:33:47.663263 3328 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:33:47.736473 kubelet[3328]: I1124 00:33:47.736412 3328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:47.737326 kubelet[3328]: I1124 00:33:47.737249 3328 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:47.753089 kubelet[3328]: E1124 00:33:47.753049 3328 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-18\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-18" Nov 24 00:33:47.755143 kubelet[3328]: E1124 00:33:47.754984 3328 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-18\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-18" Nov 24 00:33:47.811467 kubelet[3328]: I1124 00:33:47.811173 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-18" podStartSLOduration=1.8111527120000002 podStartE2EDuration="1.811152712s" podCreationTimestamp="2025-11-24 00:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:33:47.810957222 +0000 UTC m=+1.330554869" watchObservedRunningTime="2025-11-24 00:33:47.811152712 +0000 UTC m=+1.330750358" Nov 24 00:33:47.811467 kubelet[3328]: I1124 00:33:47.811296 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-18" podStartSLOduration=1.811290365 podStartE2EDuration="1.811290365s" podCreationTimestamp="2025-11-24 00:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:33:47.790649399 +0000 UTC m=+1.310247049" watchObservedRunningTime="2025-11-24 00:33:47.811290365 +0000 UTC m=+1.330888012" Nov 24 00:33:47.830965 kubelet[3328]: I1124 00:33:47.830590 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-18" 
podStartSLOduration=1.830571153 podStartE2EDuration="1.830571153s" podCreationTimestamp="2025-11-24 00:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:33:47.830316611 +0000 UTC m=+1.349914258" watchObservedRunningTime="2025-11-24 00:33:47.830571153 +0000 UTC m=+1.350168803" Nov 24 00:33:48.447300 update_engine[1966]: I20251124 00:33:48.447185 1966 update_attempter.cc:509] Updating boot flags... Nov 24 00:33:51.505420 kubelet[3328]: I1124 00:33:51.505376 3328 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:33:51.506746 containerd[1991]: time="2025-11-24T00:33:51.506713489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:33:51.508745 kubelet[3328]: I1124 00:33:51.507429 3328 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:33:52.318895 systemd[1]: Created slice kubepods-besteffort-poda756dc72_e310_4aee_b0eb_30bf9e181768.slice - libcontainer container kubepods-besteffort-poda756dc72_e310_4aee_b0eb_30bf9e181768.slice. Nov 24 00:33:52.419920 kubelet[3328]: I1124 00:33:52.419669 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a756dc72-e310-4aee-b0eb-30bf9e181768-kube-proxy\") pod \"kube-proxy-jscrc\" (UID: \"a756dc72-e310-4aee-b0eb-30bf9e181768\") " pod="kube-system/kube-proxy-jscrc" Nov 24 00:33:52.419920 kubelet[3328]: I1124 00:33:52.419726 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a756dc72-e310-4aee-b0eb-30bf9e181768-xtables-lock\") pod \"kube-proxy-jscrc\" (UID: \"a756dc72-e310-4aee-b0eb-30bf9e181768\") " pod="kube-system/kube-proxy-jscrc" Nov 24 00:33:52.419920 kubelet[3328]: I1124 00:33:52.419745 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a756dc72-e310-4aee-b0eb-30bf9e181768-lib-modules\") pod \"kube-proxy-jscrc\" (UID: \"a756dc72-e310-4aee-b0eb-30bf9e181768\") " pod="kube-system/kube-proxy-jscrc" Nov 24 00:33:52.419920 kubelet[3328]: I1124 00:33:52.419774 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bndcm\" (UniqueName: \"kubernetes.io/projected/a756dc72-e310-4aee-b0eb-30bf9e181768-kube-api-access-bndcm\") pod \"kube-proxy-jscrc\" (UID: \"a756dc72-e310-4aee-b0eb-30bf9e181768\") " pod="kube-system/kube-proxy-jscrc" Nov 24 00:33:52.574653 kubelet[3328]: I1124 00:33:52.574440 3328 status_manager.go:890] "Failed to get status for pod" podUID="e4d79e30-c62d-4a6e-a66e-1610f16cfa49" pod="tigera-operator/tigera-operator-7dcd859c48-rltg6" err="pods \"tigera-operator-7dcd859c48-rltg6\" is forbidden: User \"system:node:ip-172-31-20-18\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-20-18' and this object" Nov 24 00:33:52.576183 systemd[1]: Created slice kubepods-besteffort-pode4d79e30_c62d_4a6e_a66e_1610f16cfa49.slice - libcontainer container kubepods-besteffort-pode4d79e30_c62d_4a6e_a66e_1610f16cfa49.slice. 
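The "Updating runtime config through cri with podcidr" line is the kubelet pushing this node's pod CIDR to containerd over its CRI socket, after which containerd logs that it will wait for a CNI plugin (the Calico components installed below) to drop a network config. A hedged sketch of that single CRI call; the socket path and CIDR are taken from the log, everything else is illustrative:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint on this node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR pushed; containerd now waits for a CNI config to appear")
}
```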
Nov 24 00:33:52.620893 kubelet[3328]: I1124 00:33:52.620847 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnspf\" (UniqueName: \"kubernetes.io/projected/e4d79e30-c62d-4a6e-a66e-1610f16cfa49-kube-api-access-xnspf\") pod \"tigera-operator-7dcd859c48-rltg6\" (UID: \"e4d79e30-c62d-4a6e-a66e-1610f16cfa49\") " pod="tigera-operator/tigera-operator-7dcd859c48-rltg6" Nov 24 00:33:52.620893 kubelet[3328]: I1124 00:33:52.620896 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4d79e30-c62d-4a6e-a66e-1610f16cfa49-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rltg6\" (UID: \"e4d79e30-c62d-4a6e-a66e-1610f16cfa49\") " pod="tigera-operator/tigera-operator-7dcd859c48-rltg6" Nov 24 00:33:52.628839 containerd[1991]: time="2025-11-24T00:33:52.628797037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jscrc,Uid:a756dc72-e310-4aee-b0eb-30bf9e181768,Namespace:kube-system,Attempt:0,}" Nov 24 00:33:52.664559 containerd[1991]: time="2025-11-24T00:33:52.663608806Z" level=info msg="connecting to shim 94cc6d49dc53738f3d7baca8dbf25f0538239d9ad14c2bb0abdb186d87052265" address="unix:///run/containerd/s/d8393b086d7fb5433a58440926dcaf4f68fe296c14a99ac89c3b1745a55c87ff" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:33:52.700738 systemd[1]: Started cri-containerd-94cc6d49dc53738f3d7baca8dbf25f0538239d9ad14c2bb0abdb186d87052265.scope - libcontainer container 94cc6d49dc53738f3d7baca8dbf25f0538239d9ad14c2bb0abdb186d87052265. Nov 24 00:33:52.737883 containerd[1991]: time="2025-11-24T00:33:52.737825916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jscrc,Uid:a756dc72-e310-4aee-b0eb-30bf9e181768,Namespace:kube-system,Attempt:0,} returns sandbox id \"94cc6d49dc53738f3d7baca8dbf25f0538239d9ad14c2bb0abdb186d87052265\"" Nov 24 00:33:52.743106 containerd[1991]: time="2025-11-24T00:33:52.743062165Z" level=info msg="CreateContainer within sandbox \"94cc6d49dc53738f3d7baca8dbf25f0538239d9ad14c2bb0abdb186d87052265\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:33:52.803023 containerd[1991]: time="2025-11-24T00:33:52.802980474Z" level=info msg="Container 78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:33:52.823943 containerd[1991]: time="2025-11-24T00:33:52.823791629Z" level=info msg="CreateContainer within sandbox \"94cc6d49dc53738f3d7baca8dbf25f0538239d9ad14c2bb0abdb186d87052265\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f\"" Nov 24 00:33:52.824959 containerd[1991]: time="2025-11-24T00:33:52.824690932Z" level=info msg="StartContainer for \"78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f\"" Nov 24 00:33:52.828068 containerd[1991]: time="2025-11-24T00:33:52.828019408Z" level=info msg="connecting to shim 78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f" address="unix:///run/containerd/s/d8393b086d7fb5433a58440926dcaf4f68fe296c14a99ac89c3b1745a55c87ff" protocol=ttrpc version=3 Nov 24 00:33:52.859742 systemd[1]: Started cri-containerd-78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f.scope - libcontainer container 78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f. 
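Sandbox and container IDs like 94cc6d49… and 78bfcb8c… above are ordinary containerd objects kept in the CRI-managed "k8s.io" namespace. A short, purely illustrative containerd Go-client sketch for listing them on this node:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI pods and their containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		// The pause sandbox and kube-proxy IDs from the log show up here.
		fmt.Println(c.ID(), info.Image)
	}
}
```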
Nov 24 00:33:52.882548 containerd[1991]: time="2025-11-24T00:33:52.882490721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rltg6,Uid:e4d79e30-c62d-4a6e-a66e-1610f16cfa49,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:33:52.939477 containerd[1991]: time="2025-11-24T00:33:52.939175015Z" level=info msg="connecting to shim d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7" address="unix:///run/containerd/s/1eb6a4323d03d4838111bf8a3738f700b4171dc26d4a84e35c9f248c5a6c642b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:33:52.963403 containerd[1991]: time="2025-11-24T00:33:52.963350173Z" level=info msg="StartContainer for \"78bfcb8c261672dcbf34d9ad5a9026114aaeeab38202341d52c07f49242b457f\" returns successfully" Nov 24 00:33:52.990842 systemd[1]: Started cri-containerd-d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7.scope - libcontainer container d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7. Nov 24 00:33:53.062085 containerd[1991]: time="2025-11-24T00:33:53.061972752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rltg6,Uid:e4d79e30-c62d-4a6e-a66e-1610f16cfa49,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7\"" Nov 24 00:33:53.069201 containerd[1991]: time="2025-11-24T00:33:53.068989603Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:33:53.545878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581730595.mount: Deactivated successfully. Nov 24 00:33:54.499830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981039129.mount: Deactivated successfully. Nov 24 00:33:55.737148 containerd[1991]: time="2025-11-24T00:33:55.737087691Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:55.738288 containerd[1991]: time="2025-11-24T00:33:55.738142046Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:33:55.739531 containerd[1991]: time="2025-11-24T00:33:55.739469158Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:55.744209 containerd[1991]: time="2025-11-24T00:33:55.744138983Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:33:55.745096 containerd[1991]: time="2025-11-24T00:33:55.744805844Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.675770942s" Nov 24 00:33:55.745096 containerd[1991]: time="2025-11-24T00:33:55.744847347Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:33:55.747967 containerd[1991]: time="2025-11-24T00:33:55.747689483Z" level=info msg="CreateContainer within sandbox \"d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:33:55.761195 containerd[1991]: time="2025-11-24T00:33:55.761150496Z" level=info msg="Container 7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:33:55.773803 containerd[1991]: time="2025-11-24T00:33:55.773757716Z" level=info msg="CreateContainer within sandbox \"d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\"" Nov 24 00:33:55.774425 containerd[1991]: time="2025-11-24T00:33:55.774386437Z" level=info msg="StartContainer for \"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\"" Nov 24 00:33:55.777205 containerd[1991]: time="2025-11-24T00:33:55.777169323Z" level=info msg="connecting to shim 7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698" address="unix:///run/containerd/s/1eb6a4323d03d4838111bf8a3738f700b4171dc26d4a84e35c9f248c5a6c642b" protocol=ttrpc version=3 Nov 24 00:33:55.824741 systemd[1]: Started cri-containerd-7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698.scope - libcontainer container 7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698. Nov 24 00:33:55.865789 containerd[1991]: time="2025-11-24T00:33:55.865745891Z" level=info msg="StartContainer for \"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\" returns successfully" Nov 24 00:33:56.698356 kubelet[3328]: I1124 00:33:56.698178 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jscrc" podStartSLOduration=4.698156045 podStartE2EDuration="4.698156045s" podCreationTimestamp="2025-11-24 00:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:33:53.839353877 +0000 UTC m=+7.358951518" watchObservedRunningTime="2025-11-24 00:33:56.698156045 +0000 UTC m=+10.217753709" Nov 24 00:33:57.237973 kubelet[3328]: I1124 00:33:57.237894 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rltg6" podStartSLOduration=2.5577249010000003 podStartE2EDuration="5.237871362s" podCreationTimestamp="2025-11-24 00:33:52 +0000 UTC" firstStartedPulling="2025-11-24 00:33:53.065868498 +0000 UTC m=+6.585466124" lastFinishedPulling="2025-11-24 00:33:55.746014945 +0000 UTC m=+9.265612585" observedRunningTime="2025-11-24 00:33:56.83843091 +0000 UTC m=+10.358028558" watchObservedRunningTime="2025-11-24 00:33:57.237871362 +0000 UTC m=+10.757469002" Nov 24 00:34:03.669395 sudo[2382]: pam_unix(sudo:session): session closed for user root Nov 24 00:34:03.693130 sshd[2381]: Connection closed by 139.178.89.65 port 51484 Nov 24 00:34:03.695202 sshd-session[2376]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:03.702652 systemd-logind[1963]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:34:03.704485 systemd[1]: sshd@8-172.31.20.18:22-139.178.89.65:51484.service: Deactivated successfully. Nov 24 00:34:03.712824 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:34:03.714535 systemd[1]: session-9.scope: Consumed 4.832s CPU time, 156.7M memory peak. Nov 24 00:34:03.719214 systemd-logind[1963]: Removed session 9. 
Nov 24 00:34:10.886295 systemd[1]: Created slice kubepods-besteffort-podc3d79330_8b78_493c_802e_cbc6a960a643.slice - libcontainer container kubepods-besteffort-podc3d79330_8b78_493c_802e_cbc6a960a643.slice. Nov 24 00:34:10.912468 kubelet[3328]: I1124 00:34:10.912418 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c3d79330-8b78-493c-802e-cbc6a960a643-typha-certs\") pod \"calico-typha-74748d8f4-tmmmh\" (UID: \"c3d79330-8b78-493c-802e-cbc6a960a643\") " pod="calico-system/calico-typha-74748d8f4-tmmmh" Nov 24 00:34:10.912994 kubelet[3328]: I1124 00:34:10.912495 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d79330-8b78-493c-802e-cbc6a960a643-tigera-ca-bundle\") pod \"calico-typha-74748d8f4-tmmmh\" (UID: \"c3d79330-8b78-493c-802e-cbc6a960a643\") " pod="calico-system/calico-typha-74748d8f4-tmmmh" Nov 24 00:34:10.912994 kubelet[3328]: I1124 00:34:10.912529 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x597v\" (UniqueName: \"kubernetes.io/projected/c3d79330-8b78-493c-802e-cbc6a960a643-kube-api-access-x597v\") pod \"calico-typha-74748d8f4-tmmmh\" (UID: \"c3d79330-8b78-493c-802e-cbc6a960a643\") " pod="calico-system/calico-typha-74748d8f4-tmmmh" Nov 24 00:34:11.119504 systemd[1]: Created slice kubepods-besteffort-pod78b66b47_7d96_4aaa_a442_aa78a57d8f31.slice - libcontainer container kubepods-besteffort-pod78b66b47_7d96_4aaa_a442_aa78a57d8f31.slice. Nov 24 00:34:11.201012 containerd[1991]: time="2025-11-24T00:34:11.200835236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74748d8f4-tmmmh,Uid:c3d79330-8b78-493c-802e-cbc6a960a643,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:11.214775 kubelet[3328]: I1124 00:34:11.214728 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-cni-log-dir\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.214938 kubelet[3328]: I1124 00:34:11.214791 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-flexvol-driver-host\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.214938 kubelet[3328]: I1124 00:34:11.214815 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78b66b47-7d96-4aaa-a442-aa78a57d8f31-tigera-ca-bundle\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.214938 kubelet[3328]: I1124 00:34:11.214838 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-xtables-lock\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.214938 kubelet[3328]: I1124 00:34:11.214874 3328 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-cni-bin-dir\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.214938 kubelet[3328]: I1124 00:34:11.214899 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-lib-modules\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.215155 kubelet[3328]: I1124 00:34:11.214919 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-var-lib-calico\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.215155 kubelet[3328]: I1124 00:34:11.214942 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-policysync\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.215155 kubelet[3328]: I1124 00:34:11.214965 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npfb\" (UniqueName: \"kubernetes.io/projected/78b66b47-7d96-4aaa-a442-aa78a57d8f31-kube-api-access-6npfb\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.215155 kubelet[3328]: I1124 00:34:11.214992 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-cni-net-dir\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.215155 kubelet[3328]: I1124 00:34:11.215014 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78b66b47-7d96-4aaa-a442-aa78a57d8f31-node-certs\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.215365 kubelet[3328]: I1124 00:34:11.215039 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78b66b47-7d96-4aaa-a442-aa78a57d8f31-var-run-calico\") pod \"calico-node-7gd9l\" (UID: \"78b66b47-7d96-4aaa-a442-aa78a57d8f31\") " pod="calico-system/calico-node-7gd9l" Nov 24 00:34:11.249788 containerd[1991]: time="2025-11-24T00:34:11.249444600Z" level=info msg="connecting to shim 2e0851e4c7a721446b62671c626025ed5732dea72bad99424ac0745cdf5f15b0" address="unix:///run/containerd/s/92bc3cc53b006a8017d9834802f527fa30a3bca16bef926b46f91ebb00a02405" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:11.304823 systemd[1]: Started cri-containerd-2e0851e4c7a721446b62671c626025ed5732dea72bad99424ac0745cdf5f15b0.scope - libcontainer container 2e0851e4c7a721446b62671c626025ed5732dea72bad99424ac0745cdf5f15b0. 
Nov 24 00:34:11.321290 kubelet[3328]: E1124 00:34:11.321173 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.321290 kubelet[3328]: W1124 00:34:11.321201 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.322565 kubelet[3328]: E1124 00:34:11.322210 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.322565 kubelet[3328]: E1124 00:34:11.322550 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.322670 kubelet[3328]: W1124 00:34:11.322568 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.322734 kubelet[3328]: E1124 00:34:11.322590 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.337776 kubelet[3328]: E1124 00:34:11.337731 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:11.342776 kubelet[3328]: E1124 00:34:11.342750 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.343008 kubelet[3328]: W1124 00:34:11.342929 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.343008 kubelet[3328]: E1124 00:34:11.342964 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.356704 kubelet[3328]: E1124 00:34:11.356395 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.356704 kubelet[3328]: W1124 00:34:11.356500 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.356704 kubelet[3328]: E1124 00:34:11.356528 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.400598 kubelet[3328]: E1124 00:34:11.400556 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.400598 kubelet[3328]: W1124 00:34:11.400587 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.401798 kubelet[3328]: E1124 00:34:11.400612 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.401798 kubelet[3328]: E1124 00:34:11.401791 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.401901 kubelet[3328]: W1124 00:34:11.401809 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.401901 kubelet[3328]: E1124 00:34:11.401847 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.402167 kubelet[3328]: E1124 00:34:11.402136 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.402167 kubelet[3328]: W1124 00:34:11.402166 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.402277 kubelet[3328]: E1124 00:34:11.402183 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.402654 kubelet[3328]: E1124 00:34:11.402506 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.402654 kubelet[3328]: W1124 00:34:11.402522 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.402654 kubelet[3328]: E1124 00:34:11.402539 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.402934 kubelet[3328]: E1124 00:34:11.402921 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.403463 kubelet[3328]: W1124 00:34:11.403001 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.403463 kubelet[3328]: E1124 00:34:11.403019 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.403715 kubelet[3328]: E1124 00:34:11.403702 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.403908 kubelet[3328]: W1124 00:34:11.403799 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.403908 kubelet[3328]: E1124 00:34:11.403818 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.404611 kubelet[3328]: E1124 00:34:11.404596 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.404807 kubelet[3328]: W1124 00:34:11.404696 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.404807 kubelet[3328]: E1124 00:34:11.404715 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.405223 kubelet[3328]: E1124 00:34:11.405011 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.405223 kubelet[3328]: W1124 00:34:11.405023 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.405223 kubelet[3328]: E1124 00:34:11.405036 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.406618 kubelet[3328]: E1124 00:34:11.405546 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.406618 kubelet[3328]: W1124 00:34:11.406489 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.406618 kubelet[3328]: E1124 00:34:11.406512 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.406910 kubelet[3328]: E1124 00:34:11.406899 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.407085 kubelet[3328]: W1124 00:34:11.406978 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.407085 kubelet[3328]: E1124 00:34:11.406997 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.407304 kubelet[3328]: E1124 00:34:11.407293 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.407472 kubelet[3328]: W1124 00:34:11.407366 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.407472 kubelet[3328]: E1124 00:34:11.407382 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.407856 kubelet[3328]: E1124 00:34:11.407708 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.407856 kubelet[3328]: W1124 00:34:11.407722 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.407856 kubelet[3328]: E1124 00:34:11.407736 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.409464 kubelet[3328]: E1124 00:34:11.408160 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.409674 kubelet[3328]: W1124 00:34:11.409548 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.409674 kubelet[3328]: E1124 00:34:11.409570 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.409915 kubelet[3328]: E1124 00:34:11.409903 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.410086 kubelet[3328]: W1124 00:34:11.409987 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.410086 kubelet[3328]: E1124 00:34:11.410006 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.410369 kubelet[3328]: E1124 00:34:11.410297 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.410369 kubelet[3328]: W1124 00:34:11.410309 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.410369 kubelet[3328]: E1124 00:34:11.410321 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.410780 kubelet[3328]: E1124 00:34:11.410693 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.410780 kubelet[3328]: W1124 00:34:11.410705 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.410780 kubelet[3328]: E1124 00:34:11.410718 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.411192 kubelet[3328]: E1124 00:34:11.411056 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.411192 kubelet[3328]: W1124 00:34:11.411067 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.411192 kubelet[3328]: E1124 00:34:11.411079 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.411639 kubelet[3328]: E1124 00:34:11.411482 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.411639 kubelet[3328]: W1124 00:34:11.411493 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.411639 kubelet[3328]: E1124 00:34:11.411506 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.412741 kubelet[3328]: E1124 00:34:11.412663 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.412741 kubelet[3328]: W1124 00:34:11.412677 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.412741 kubelet[3328]: E1124 00:34:11.412690 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.413136 kubelet[3328]: E1124 00:34:11.413050 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.413136 kubelet[3328]: W1124 00:34:11.413063 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.413136 kubelet[3328]: E1124 00:34:11.413076 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.418502 kubelet[3328]: E1124 00:34:11.417689 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.418833 kubelet[3328]: W1124 00:34:11.418672 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.418833 kubelet[3328]: E1124 00:34:11.418708 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.418833 kubelet[3328]: I1124 00:34:11.418747 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96ab7330-0514-4b4d-8ac0-0b3305cdbb91-socket-dir\") pod \"csi-node-driver-48vsh\" (UID: \"96ab7330-0514-4b4d-8ac0-0b3305cdbb91\") " pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:11.419307 kubelet[3328]: E1124 00:34:11.419267 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.419307 kubelet[3328]: W1124 00:34:11.419285 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.420722 kubelet[3328]: E1124 00:34:11.420559 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.420722 kubelet[3328]: W1124 00:34:11.420580 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.420722 kubelet[3328]: E1124 00:34:11.420600 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.420937 kubelet[3328]: E1124 00:34:11.419443 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.421026 kubelet[3328]: I1124 00:34:11.421012 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/96ab7330-0514-4b4d-8ac0-0b3305cdbb91-varrun\") pod \"csi-node-driver-48vsh\" (UID: \"96ab7330-0514-4b4d-8ac0-0b3305cdbb91\") " pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:11.421194 kubelet[3328]: E1124 00:34:11.421146 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.421194 kubelet[3328]: W1124 00:34:11.421159 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.421194 kubelet[3328]: E1124 00:34:11.421174 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.421684 kubelet[3328]: E1124 00:34:11.421652 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.421684 kubelet[3328]: W1124 00:34:11.421666 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.422612 kubelet[3328]: E1124 00:34:11.422492 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.422829 kubelet[3328]: E1124 00:34:11.422814 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.422932 kubelet[3328]: W1124 00:34:11.422900 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.423157 kubelet[3328]: E1124 00:34:11.423115 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.423467 kubelet[3328]: E1124 00:34:11.423262 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.423607 kubelet[3328]: W1124 00:34:11.423565 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.423607 kubelet[3328]: E1124 00:34:11.423588 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.424039 kubelet[3328]: I1124 00:34:11.423962 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96ab7330-0514-4b4d-8ac0-0b3305cdbb91-kubelet-dir\") pod \"csi-node-driver-48vsh\" (UID: \"96ab7330-0514-4b4d-8ac0-0b3305cdbb91\") " pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:11.425993 kubelet[3328]: E1124 00:34:11.425976 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.426223 kubelet[3328]: W1124 00:34:11.426090 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.426223 kubelet[3328]: E1124 00:34:11.426115 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.426853 containerd[1991]: time="2025-11-24T00:34:11.426578860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7gd9l,Uid:78b66b47-7d96-4aaa-a442-aa78a57d8f31,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:11.426931 kubelet[3328]: E1124 00:34:11.426654 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.426931 kubelet[3328]: W1124 00:34:11.426666 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.426931 kubelet[3328]: E1124 00:34:11.426680 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.427599 kubelet[3328]: E1124 00:34:11.427521 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.428364 kubelet[3328]: W1124 00:34:11.428084 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.428364 kubelet[3328]: E1124 00:34:11.428106 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.428364 kubelet[3328]: I1124 00:34:11.428160 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9whk\" (UniqueName: \"kubernetes.io/projected/96ab7330-0514-4b4d-8ac0-0b3305cdbb91-kube-api-access-q9whk\") pod \"csi-node-driver-48vsh\" (UID: \"96ab7330-0514-4b4d-8ac0-0b3305cdbb91\") " pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:11.429261 kubelet[3328]: E1124 00:34:11.429040 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.429261 kubelet[3328]: W1124 00:34:11.429059 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.429261 kubelet[3328]: E1124 00:34:11.429074 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.429512 kubelet[3328]: E1124 00:34:11.429500 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.429600 kubelet[3328]: W1124 00:34:11.429581 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.430238 kubelet[3328]: E1124 00:34:11.429661 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.431673 kubelet[3328]: E1124 00:34:11.431621 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.431673 kubelet[3328]: W1124 00:34:11.431638 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.431673 kubelet[3328]: E1124 00:34:11.431653 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.433788 kubelet[3328]: I1124 00:34:11.433513 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96ab7330-0514-4b4d-8ac0-0b3305cdbb91-registration-dir\") pod \"csi-node-driver-48vsh\" (UID: \"96ab7330-0514-4b4d-8ac0-0b3305cdbb91\") " pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:11.434168 kubelet[3328]: E1124 00:34:11.434079 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.434168 kubelet[3328]: W1124 00:34:11.434099 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.434168 kubelet[3328]: E1124 00:34:11.434116 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.434692 kubelet[3328]: E1124 00:34:11.434614 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.434692 kubelet[3328]: W1124 00:34:11.434632 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.434692 kubelet[3328]: E1124 00:34:11.434648 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.473866 containerd[1991]: time="2025-11-24T00:34:11.473689120Z" level=info msg="connecting to shim a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355" address="unix:///run/containerd/s/30b89262a9421dce61e00876263069dae6f40c3f1c0f621af2737cb6b20b10b5" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:11.535006 systemd[1]: Started cri-containerd-a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355.scope - libcontainer container a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355. 
Nov 24 00:34:11.535370 kubelet[3328]: E1124 00:34:11.535348 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.535613 kubelet[3328]: W1124 00:34:11.535483 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.535613 kubelet[3328]: E1124 00:34:11.535515 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.536950 kubelet[3328]: E1124 00:34:11.536911 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.536950 kubelet[3328]: W1124 00:34:11.536929 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.537172 kubelet[3328]: E1124 00:34:11.537089 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.537876 kubelet[3328]: E1124 00:34:11.537726 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.537876 kubelet[3328]: W1124 00:34:11.537743 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.537876 kubelet[3328]: E1124 00:34:11.537759 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.538807 kubelet[3328]: E1124 00:34:11.538764 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.538807 kubelet[3328]: W1124 00:34:11.538780 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.539347 kubelet[3328]: E1124 00:34:11.539199 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.539347 kubelet[3328]: W1124 00:34:11.539213 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.539347 kubelet[3328]: E1124 00:34:11.539228 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.541635 kubelet[3328]: E1124 00:34:11.541584 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.541804 kubelet[3328]: E1124 00:34:11.541793 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.541883 kubelet[3328]: W1124 00:34:11.541862 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.542098 kubelet[3328]: E1124 00:34:11.541973 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.542332 kubelet[3328]: E1124 00:34:11.542321 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.542481 kubelet[3328]: W1124 00:34:11.542444 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.542605 kubelet[3328]: E1124 00:34:11.542557 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.543200 kubelet[3328]: E1124 00:34:11.543070 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.543200 kubelet[3328]: W1124 00:34:11.543084 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.543200 kubelet[3328]: E1124 00:34:11.543100 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.543676 kubelet[3328]: E1124 00:34:11.543638 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.543676 kubelet[3328]: W1124 00:34:11.543652 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.543946 kubelet[3328]: E1124 00:34:11.543881 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.544155 kubelet[3328]: E1124 00:34:11.544128 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.544155 kubelet[3328]: W1124 00:34:11.544140 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.544331 kubelet[3328]: E1124 00:34:11.544319 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.544632 kubelet[3328]: E1124 00:34:11.544601 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.544632 kubelet[3328]: W1124 00:34:11.544615 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.544838 kubelet[3328]: E1124 00:34:11.544813 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.545068 kubelet[3328]: E1124 00:34:11.545056 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.545215 kubelet[3328]: W1124 00:34:11.545134 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.545215 kubelet[3328]: E1124 00:34:11.545166 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.545605 kubelet[3328]: E1124 00:34:11.545576 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.545605 kubelet[3328]: W1124 00:34:11.545589 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.547470 kubelet[3328]: E1124 00:34:11.545733 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.547838 kubelet[3328]: E1124 00:34:11.547803 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.547838 kubelet[3328]: W1124 00:34:11.547820 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.548016 kubelet[3328]: E1124 00:34:11.547970 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.548292 kubelet[3328]: E1124 00:34:11.548278 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.548480 kubelet[3328]: W1124 00:34:11.548372 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.548480 kubelet[3328]: E1124 00:34:11.548409 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.548812 kubelet[3328]: E1124 00:34:11.548783 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.548812 kubelet[3328]: W1124 00:34:11.548796 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.548984 kubelet[3328]: E1124 00:34:11.548971 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.549177 kubelet[3328]: E1124 00:34:11.549166 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.549315 kubelet[3328]: W1124 00:34:11.549239 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.549460 kubelet[3328]: E1124 00:34:11.549433 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.549604 kubelet[3328]: E1124 00:34:11.549579 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.549604 kubelet[3328]: W1124 00:34:11.549590 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.549799 kubelet[3328]: E1124 00:34:11.549742 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.549972 kubelet[3328]: E1124 00:34:11.549947 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.549972 kubelet[3328]: W1124 00:34:11.549958 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.550162 kubelet[3328]: E1124 00:34:11.550081 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.550553 kubelet[3328]: E1124 00:34:11.550367 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.550553 kubelet[3328]: W1124 00:34:11.550379 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.550553 kubelet[3328]: E1124 00:34:11.550391 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.551686 kubelet[3328]: E1124 00:34:11.551671 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.554850 kubelet[3328]: W1124 00:34:11.554536 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.554850 kubelet[3328]: E1124 00:34:11.554571 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.555593 kubelet[3328]: E1124 00:34:11.555569 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.555593 kubelet[3328]: W1124 00:34:11.555593 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.555720 kubelet[3328]: E1124 00:34:11.555615 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.555874 kubelet[3328]: E1124 00:34:11.555859 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.555936 kubelet[3328]: W1124 00:34:11.555875 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.555936 kubelet[3328]: E1124 00:34:11.555891 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.558469 kubelet[3328]: E1124 00:34:11.557545 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.558469 kubelet[3328]: W1124 00:34:11.557562 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.558469 kubelet[3328]: E1124 00:34:11.557580 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.558469 kubelet[3328]: E1124 00:34:11.557832 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.558469 kubelet[3328]: W1124 00:34:11.557842 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.558469 kubelet[3328]: E1124 00:34:11.557854 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:34:11.560558 kubelet[3328]: E1124 00:34:11.560538 3328 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:34:11.560666 kubelet[3328]: W1124 00:34:11.560653 3328 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:34:11.560733 kubelet[3328]: E1124 00:34:11.560722 3328 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:34:11.636443 containerd[1991]: time="2025-11-24T00:34:11.636376685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7gd9l,Uid:78b66b47-7d96-4aaa-a442-aa78a57d8f31,Namespace:calico-system,Attempt:0,} returns sandbox id \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\"" Nov 24 00:34:11.639289 containerd[1991]: time="2025-11-24T00:34:11.639247813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:34:11.740050 containerd[1991]: time="2025-11-24T00:34:11.690722190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74748d8f4-tmmmh,Uid:c3d79330-8b78-493c-802e-cbc6a960a643,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e0851e4c7a721446b62671c626025ed5732dea72bad99424ac0745cdf5f15b0\"" Nov 24 00:34:13.013851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847238705.mount: Deactivated successfully. Nov 24 00:34:13.152831 containerd[1991]: time="2025-11-24T00:34:13.152787753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:13.154761 containerd[1991]: time="2025-11-24T00:34:13.154693320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 24 00:34:13.159340 containerd[1991]: time="2025-11-24T00:34:13.157489514Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:13.165258 containerd[1991]: time="2025-11-24T00:34:13.165211490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:13.167301 containerd[1991]: time="2025-11-24T00:34:13.167234963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.52793657s" Nov 24 00:34:13.167576 containerd[1991]: time="2025-11-24T00:34:13.167551395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:34:13.180750 containerd[1991]: time="2025-11-24T00:34:13.180613425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:34:13.183379 
containerd[1991]: time="2025-11-24T00:34:13.183339118Z" level=info msg="CreateContainer within sandbox \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:34:13.231475 containerd[1991]: time="2025-11-24T00:34:13.230428786Z" level=info msg="Container 3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:34:13.242141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006933328.mount: Deactivated successfully. Nov 24 00:34:13.258978 containerd[1991]: time="2025-11-24T00:34:13.258920299Z" level=info msg="CreateContainer within sandbox \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341\"" Nov 24 00:34:13.260836 containerd[1991]: time="2025-11-24T00:34:13.260793371Z" level=info msg="StartContainer for \"3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341\"" Nov 24 00:34:13.264673 containerd[1991]: time="2025-11-24T00:34:13.264512286Z" level=info msg="connecting to shim 3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341" address="unix:///run/containerd/s/30b89262a9421dce61e00876263069dae6f40c3f1c0f621af2737cb6b20b10b5" protocol=ttrpc version=3 Nov 24 00:34:13.318815 systemd[1]: Started cri-containerd-3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341.scope - libcontainer container 3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341. Nov 24 00:34:13.461257 containerd[1991]: time="2025-11-24T00:34:13.461192592Z" level=info msg="StartContainer for \"3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341\" returns successfully" Nov 24 00:34:13.487706 systemd[1]: cri-containerd-3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341.scope: Deactivated successfully. Nov 24 00:34:13.523162 containerd[1991]: time="2025-11-24T00:34:13.522853276Z" level=info msg="received container exit event container_id:\"3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341\" id:\"3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341\" pid:4193 exited_at:{seconds:1763944453 nanos:493082790}" Nov 24 00:34:13.556313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e53cfe78de092ce97a3880dab92f3367880875efb5a9e23c721758b2c19e341-rootfs.mount: Deactivated successfully. 
Nov 24 00:34:13.670823 kubelet[3328]: E1124 00:34:13.670199 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:15.285396 containerd[1991]: time="2025-11-24T00:34:15.285292315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:15.289341 containerd[1991]: time="2025-11-24T00:34:15.289296638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 24 00:34:15.291935 containerd[1991]: time="2025-11-24T00:34:15.291577826Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:15.297849 containerd[1991]: time="2025-11-24T00:34:15.297785581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:15.299800 containerd[1991]: time="2025-11-24T00:34:15.299164352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.11821913s" Nov 24 00:34:15.299800 containerd[1991]: time="2025-11-24T00:34:15.299801509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:34:15.302282 containerd[1991]: time="2025-11-24T00:34:15.302246628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:34:15.330544 containerd[1991]: time="2025-11-24T00:34:15.330439491Z" level=info msg="CreateContainer within sandbox \"2e0851e4c7a721446b62671c626025ed5732dea72bad99424ac0745cdf5f15b0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:34:15.346825 containerd[1991]: time="2025-11-24T00:34:15.346759025Z" level=info msg="Container 4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:34:15.393830 containerd[1991]: time="2025-11-24T00:34:15.393753042Z" level=info msg="CreateContainer within sandbox \"2e0851e4c7a721446b62671c626025ed5732dea72bad99424ac0745cdf5f15b0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea\"" Nov 24 00:34:15.396444 containerd[1991]: time="2025-11-24T00:34:15.395919401Z" level=info msg="StartContainer for \"4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea\"" Nov 24 00:34:15.399357 containerd[1991]: time="2025-11-24T00:34:15.399308733Z" level=info msg="connecting to shim 4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea" address="unix:///run/containerd/s/92bc3cc53b006a8017d9834802f527fa30a3bca16bef926b46f91ebb00a02405" protocol=ttrpc version=3 Nov 24 00:34:15.432713 systemd[1]: Started 
cri-containerd-4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea.scope - libcontainer container 4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea. Nov 24 00:34:15.522548 containerd[1991]: time="2025-11-24T00:34:15.522504723Z" level=info msg="StartContainer for \"4b02540d252ee04760219363e0b1f810fc4fa35364050c6d0393861cdec2d9ea\" returns successfully" Nov 24 00:34:15.669204 kubelet[3328]: E1124 00:34:15.669056 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:15.964941 kubelet[3328]: I1124 00:34:15.963697 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74748d8f4-tmmmh" podStartSLOduration=2.406690765 podStartE2EDuration="5.963675479s" podCreationTimestamp="2025-11-24 00:34:10 +0000 UTC" firstStartedPulling="2025-11-24 00:34:11.744640112 +0000 UTC m=+25.264237750" lastFinishedPulling="2025-11-24 00:34:15.301624837 +0000 UTC m=+28.821222464" observedRunningTime="2025-11-24 00:34:15.963403939 +0000 UTC m=+29.483001587" watchObservedRunningTime="2025-11-24 00:34:15.963675479 +0000 UTC m=+29.483273128" Nov 24 00:34:16.915146 kubelet[3328]: I1124 00:34:16.915108 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:34:17.669471 kubelet[3328]: E1124 00:34:17.669219 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:19.355533 containerd[1991]: time="2025-11-24T00:34:19.355481303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:19.357996 containerd[1991]: time="2025-11-24T00:34:19.357953472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:34:19.360553 containerd[1991]: time="2025-11-24T00:34:19.360513103Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:19.365802 containerd[1991]: time="2025-11-24T00:34:19.364912788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:19.367351 containerd[1991]: time="2025-11-24T00:34:19.367301578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.06501328s" Nov 24 00:34:19.367549 containerd[1991]: time="2025-11-24T00:34:19.367527929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:34:19.374148 
containerd[1991]: time="2025-11-24T00:34:19.374101557Z" level=info msg="CreateContainer within sandbox \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:34:19.396482 containerd[1991]: time="2025-11-24T00:34:19.392783958Z" level=info msg="Container 5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:34:19.417024 containerd[1991]: time="2025-11-24T00:34:19.416959749Z" level=info msg="CreateContainer within sandbox \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4\"" Nov 24 00:34:19.417959 containerd[1991]: time="2025-11-24T00:34:19.417593474Z" level=info msg="StartContainer for \"5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4\"" Nov 24 00:34:19.420086 containerd[1991]: time="2025-11-24T00:34:19.420028698Z" level=info msg="connecting to shim 5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4" address="unix:///run/containerd/s/30b89262a9421dce61e00876263069dae6f40c3f1c0f621af2737cb6b20b10b5" protocol=ttrpc version=3 Nov 24 00:34:19.480020 systemd[1]: Started cri-containerd-5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4.scope - libcontainer container 5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4. Nov 24 00:34:19.576285 containerd[1991]: time="2025-11-24T00:34:19.576151818Z" level=info msg="StartContainer for \"5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4\" returns successfully" Nov 24 00:34:19.670073 kubelet[3328]: E1124 00:34:19.669921 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:20.384898 systemd[1]: cri-containerd-5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4.scope: Deactivated successfully. Nov 24 00:34:20.385258 systemd[1]: cri-containerd-5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4.scope: Consumed 633ms CPU time, 162.9M memory peak, 10.1M read from disk, 171.3M written to disk. Nov 24 00:34:20.474507 kubelet[3328]: I1124 00:34:20.474143 3328 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:34:20.521374 containerd[1991]: time="2025-11-24T00:34:20.521324459Z" level=info msg="received container exit event container_id:\"5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4\" id:\"5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4\" pid:4295 exited_at:{seconds:1763944460 nanos:499045084}" Nov 24 00:34:20.612567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f233c2faa5e2d148ecebbc2faa7eede777944cc2bd42903d113908ff1d965b4-rootfs.mount: Deactivated successfully. Nov 24 00:34:20.665529 systemd[1]: Created slice kubepods-burstable-pod8a4c302a_8212_492a_9e7d_eb959684ea88.slice - libcontainer container kubepods-burstable-pod8a4c302a_8212_492a_9e7d_eb959684ea88.slice. 
Nov 24 00:34:20.671895 kubelet[3328]: I1124 00:34:20.671707 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a4c302a-8212-492a-9e7d-eb959684ea88-config-volume\") pod \"coredns-668d6bf9bc-t4847\" (UID: \"8a4c302a-8212-492a-9e7d-eb959684ea88\") " pod="kube-system/coredns-668d6bf9bc-t4847" Nov 24 00:34:20.677975 kubelet[3328]: I1124 00:34:20.677577 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769nd\" (UniqueName: \"kubernetes.io/projected/6d264742-bc12-4821-8aea-351233494ad9-kube-api-access-769nd\") pod \"calico-kube-controllers-68b9c8d87-7ndft\" (UID: \"6d264742-bc12-4821-8aea-351233494ad9\") " pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" Nov 24 00:34:20.681566 kubelet[3328]: I1124 00:34:20.678033 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd87m\" (UniqueName: \"kubernetes.io/projected/db56e410-2d47-4d1e-affe-99108edb5a98-kube-api-access-hd87m\") pod \"coredns-668d6bf9bc-gltt9\" (UID: \"db56e410-2d47-4d1e-affe-99108edb5a98\") " pod="kube-system/coredns-668d6bf9bc-gltt9" Nov 24 00:34:20.681566 kubelet[3328]: I1124 00:34:20.678066 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3787437e-985f-4539-b2f2-cf4084ce8482-calico-apiserver-certs\") pod \"calico-apiserver-6594b85c5f-dnbvv\" (UID: \"3787437e-985f-4539-b2f2-cf4084ce8482\") " pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" Nov 24 00:34:20.681566 kubelet[3328]: I1124 00:34:20.678140 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvrb\" (UniqueName: \"kubernetes.io/projected/e9762179-b934-433c-b377-36b45fd610b8-kube-api-access-gxvrb\") pod \"calico-apiserver-6594b85c5f-5gwgt\" (UID: \"e9762179-b934-433c-b377-36b45fd610b8\") " pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" Nov 24 00:34:20.681566 kubelet[3328]: I1124 00:34:20.678165 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-ca-bundle\") pod \"whisker-7d96f4c566-mm6p8\" (UID: \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\") " pod="calico-system/whisker-7d96f4c566-mm6p8" Nov 24 00:34:20.681566 kubelet[3328]: I1124 00:34:20.678208 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03eefe9-3009-42ea-814c-37b36b40aa2b-config\") pod \"goldmane-666569f655-744x5\" (UID: \"b03eefe9-3009-42ea-814c-37b36b40aa2b\") " pod="calico-system/goldmane-666569f655-744x5" Nov 24 00:34:20.681852 kubelet[3328]: I1124 00:34:20.678236 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkcp\" (UniqueName: \"kubernetes.io/projected/8a4c302a-8212-492a-9e7d-eb959684ea88-kube-api-access-bwkcp\") pod \"coredns-668d6bf9bc-t4847\" (UID: \"8a4c302a-8212-492a-9e7d-eb959684ea88\") " pod="kube-system/coredns-668d6bf9bc-t4847" Nov 24 00:34:20.681852 kubelet[3328]: I1124 00:34:20.678263 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6d264742-bc12-4821-8aea-351233494ad9-tigera-ca-bundle\") pod \"calico-kube-controllers-68b9c8d87-7ndft\" (UID: \"6d264742-bc12-4821-8aea-351233494ad9\") " pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" Nov 24 00:34:20.681617 systemd[1]: Created slice kubepods-besteffort-pod620a2f0b_085b_4117_b2fa_5478c2d5ea1b.slice - libcontainer container kubepods-besteffort-pod620a2f0b_085b_4117_b2fa_5478c2d5ea1b.slice. Nov 24 00:34:20.683479 kubelet[3328]: I1124 00:34:20.682981 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-backend-key-pair\") pod \"whisker-7d96f4c566-mm6p8\" (UID: \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\") " pod="calico-system/whisker-7d96f4c566-mm6p8" Nov 24 00:34:20.685576 kubelet[3328]: I1124 00:34:20.685549 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clmzg\" (UniqueName: \"kubernetes.io/projected/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-kube-api-access-clmzg\") pod \"whisker-7d96f4c566-mm6p8\" (UID: \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\") " pod="calico-system/whisker-7d96f4c566-mm6p8" Nov 24 00:34:20.686544 kubelet[3328]: I1124 00:34:20.686001 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b03eefe9-3009-42ea-814c-37b36b40aa2b-goldmane-key-pair\") pod \"goldmane-666569f655-744x5\" (UID: \"b03eefe9-3009-42ea-814c-37b36b40aa2b\") " pod="calico-system/goldmane-666569f655-744x5" Nov 24 00:34:20.686544 kubelet[3328]: I1124 00:34:20.686038 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b03eefe9-3009-42ea-814c-37b36b40aa2b-goldmane-ca-bundle\") pod \"goldmane-666569f655-744x5\" (UID: \"b03eefe9-3009-42ea-814c-37b36b40aa2b\") " pod="calico-system/goldmane-666569f655-744x5" Nov 24 00:34:20.686544 kubelet[3328]: I1124 00:34:20.686508 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sptth\" (UniqueName: \"kubernetes.io/projected/3787437e-985f-4539-b2f2-cf4084ce8482-kube-api-access-sptth\") pod \"calico-apiserver-6594b85c5f-dnbvv\" (UID: \"3787437e-985f-4539-b2f2-cf4084ce8482\") " pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" Nov 24 00:34:20.688106 kubelet[3328]: I1124 00:34:20.686720 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5xd\" (UniqueName: \"kubernetes.io/projected/b03eefe9-3009-42ea-814c-37b36b40aa2b-kube-api-access-dw5xd\") pod \"goldmane-666569f655-744x5\" (UID: \"b03eefe9-3009-42ea-814c-37b36b40aa2b\") " pod="calico-system/goldmane-666569f655-744x5" Nov 24 00:34:20.688106 kubelet[3328]: I1124 00:34:20.686765 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db56e410-2d47-4d1e-affe-99108edb5a98-config-volume\") pod \"coredns-668d6bf9bc-gltt9\" (UID: \"db56e410-2d47-4d1e-affe-99108edb5a98\") " pod="kube-system/coredns-668d6bf9bc-gltt9" Nov 24 00:34:20.688106 kubelet[3328]: I1124 00:34:20.686784 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/e9762179-b934-433c-b377-36b45fd610b8-calico-apiserver-certs\") pod \"calico-apiserver-6594b85c5f-5gwgt\" (UID: \"e9762179-b934-433c-b377-36b45fd610b8\") " pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" Nov 24 00:34:20.702041 systemd[1]: Created slice kubepods-besteffort-podb03eefe9_3009_42ea_814c_37b36b40aa2b.slice - libcontainer container kubepods-besteffort-podb03eefe9_3009_42ea_814c_37b36b40aa2b.slice. Nov 24 00:34:20.715022 systemd[1]: Created slice kubepods-besteffort-pode9762179_b934_433c_b377_36b45fd610b8.slice - libcontainer container kubepods-besteffort-pode9762179_b934_433c_b377_36b45fd610b8.slice. Nov 24 00:34:20.725113 systemd[1]: Created slice kubepods-besteffort-pod6d264742_bc12_4821_8aea_351233494ad9.slice - libcontainer container kubepods-besteffort-pod6d264742_bc12_4821_8aea_351233494ad9.slice. Nov 24 00:34:20.736770 systemd[1]: Created slice kubepods-burstable-poddb56e410_2d47_4d1e_affe_99108edb5a98.slice - libcontainer container kubepods-burstable-poddb56e410_2d47_4d1e_affe_99108edb5a98.slice. Nov 24 00:34:20.747875 systemd[1]: Created slice kubepods-besteffort-pod3787437e_985f_4539_b2f2_cf4084ce8482.slice - libcontainer container kubepods-besteffort-pod3787437e_985f_4539_b2f2_cf4084ce8482.slice. Nov 24 00:34:20.960423 containerd[1991]: time="2025-11-24T00:34:20.959272887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:34:20.989701 containerd[1991]: time="2025-11-24T00:34:20.989634335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4847,Uid:8a4c302a-8212-492a-9e7d-eb959684ea88,Namespace:kube-system,Attempt:0,}" Nov 24 00:34:20.992432 containerd[1991]: time="2025-11-24T00:34:20.991762785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d96f4c566-mm6p8,Uid:620a2f0b-085b-4117-b2fa-5478c2d5ea1b,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:21.023216 containerd[1991]: time="2025-11-24T00:34:21.022894845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-5gwgt,Uid:e9762179-b934-433c-b377-36b45fd610b8,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:34:21.025655 containerd[1991]: time="2025-11-24T00:34:21.025611755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-744x5,Uid:b03eefe9-3009-42ea-814c-37b36b40aa2b,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:21.046572 containerd[1991]: time="2025-11-24T00:34:21.046529041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gltt9,Uid:db56e410-2d47-4d1e-affe-99108edb5a98,Namespace:kube-system,Attempt:0,}" Nov 24 00:34:21.067190 containerd[1991]: time="2025-11-24T00:34:21.066779914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b9c8d87-7ndft,Uid:6d264742-bc12-4821-8aea-351233494ad9,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:21.086963 containerd[1991]: time="2025-11-24T00:34:21.086920562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-dnbvv,Uid:3787437e-985f-4539-b2f2-cf4084ce8482,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:34:21.420010 containerd[1991]: time="2025-11-24T00:34:21.419946216Z" level=error msg="Failed to destroy network for sandbox \"bc8e6b521c6e01103f85fb0b9528a67976fd17600cb80fedf40c4cd10ba27923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.442313 
containerd[1991]: time="2025-11-24T00:34:21.442242955Z" level=error msg="Failed to destroy network for sandbox \"48e808110a4a7044df4ffa19654c2305f7e918ce19737f7d33279834d89d8584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.490079 containerd[1991]: time="2025-11-24T00:34:21.455022525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4847,Uid:8a4c302a-8212-492a-9e7d-eb959684ea88,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e6b521c6e01103f85fb0b9528a67976fd17600cb80fedf40c4cd10ba27923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.505354 containerd[1991]: time="2025-11-24T00:34:21.505286626Z" level=error msg="Failed to destroy network for sandbox \"829f554cdd92f63ec19a23124f94ff584ab72ed92585556367aa3f118dab06da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.516213 containerd[1991]: time="2025-11-24T00:34:21.516074139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b9c8d87-7ndft,Uid:6d264742-bc12-4821-8aea-351233494ad9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"829f554cdd92f63ec19a23124f94ff584ab72ed92585556367aa3f118dab06da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.555516 containerd[1991]: time="2025-11-24T00:34:21.456598416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-744x5,Uid:b03eefe9-3009-42ea-814c-37b36b40aa2b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e808110a4a7044df4ffa19654c2305f7e918ce19737f7d33279834d89d8584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.555516 containerd[1991]: time="2025-11-24T00:34:21.555313534Z" level=error msg="Failed to destroy network for sandbox \"715a94599c0a7d69bfeea44ab458532ca2719e9ceb62da2b0f58c8efda5811ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.559533 containerd[1991]: time="2025-11-24T00:34:21.558104031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-5gwgt,Uid:e9762179-b934-433c-b377-36b45fd610b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"715a94599c0a7d69bfeea44ab458532ca2719e9ceb62da2b0f58c8efda5811ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.559998 containerd[1991]: 
time="2025-11-24T00:34:21.559963562Z" level=error msg="Failed to destroy network for sandbox \"2aa73b172a1bfea3f15b2e0da226c9637a59ed5fda8f165dc60712ba0f6ea1cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.561359 containerd[1991]: time="2025-11-24T00:34:21.561213281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gltt9,Uid:db56e410-2d47-4d1e-affe-99108edb5a98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa73b172a1bfea3f15b2e0da226c9637a59ed5fda8f165dc60712ba0f6ea1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.578693 kubelet[3328]: E1124 00:34:21.578599 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e6b521c6e01103f85fb0b9528a67976fd17600cb80fedf40c4cd10ba27923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.578967 kubelet[3328]: E1124 00:34:21.578929 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829f554cdd92f63ec19a23124f94ff584ab72ed92585556367aa3f118dab06da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.579181 containerd[1991]: time="2025-11-24T00:34:21.579117202Z" level=error msg="Failed to destroy network for sandbox \"99506656e31d16827b30c9480683b583fcd598529b33a383061d52f86b92057b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.583742 containerd[1991]: time="2025-11-24T00:34:21.583372829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d96f4c566-mm6p8,Uid:620a2f0b-085b-4117-b2fa-5478c2d5ea1b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99506656e31d16827b30c9480683b583fcd598529b33a383061d52f86b92057b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.587619 kubelet[3328]: E1124 00:34:21.586068 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"829f554cdd92f63ec19a23124f94ff584ab72ed92585556367aa3f118dab06da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" Nov 24 00:34:21.587619 kubelet[3328]: E1124 00:34:21.586315 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"829f554cdd92f63ec19a23124f94ff584ab72ed92585556367aa3f118dab06da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" Nov 24 00:34:21.587619 kubelet[3328]: E1124 00:34:21.586406 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b9c8d87-7ndft_calico-system(6d264742-bc12-4821-8aea-351233494ad9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b9c8d87-7ndft_calico-system(6d264742-bc12-4821-8aea-351233494ad9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"829f554cdd92f63ec19a23124f94ff584ab72ed92585556367aa3f118dab06da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:34:21.588731 kubelet[3328]: E1124 00:34:21.586503 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99506656e31d16827b30c9480683b583fcd598529b33a383061d52f86b92057b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.588731 kubelet[3328]: E1124 00:34:21.586549 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99506656e31d16827b30c9480683b583fcd598529b33a383061d52f86b92057b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d96f4c566-mm6p8" Nov 24 00:34:21.588731 kubelet[3328]: E1124 00:34:21.586572 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99506656e31d16827b30c9480683b583fcd598529b33a383061d52f86b92057b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d96f4c566-mm6p8" Nov 24 00:34:21.588880 kubelet[3328]: E1124 00:34:21.586799 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d96f4c566-mm6p8_calico-system(620a2f0b-085b-4117-b2fa-5478c2d5ea1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d96f4c566-mm6p8_calico-system(620a2f0b-085b-4117-b2fa-5478c2d5ea1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99506656e31d16827b30c9480683b583fcd598529b33a383061d52f86b92057b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d96f4c566-mm6p8" podUID="620a2f0b-085b-4117-b2fa-5478c2d5ea1b" Nov 24 00:34:21.588880 kubelet[3328]: E1124 00:34:21.586154 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"2aa73b172a1bfea3f15b2e0da226c9637a59ed5fda8f165dc60712ba0f6ea1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.588880 kubelet[3328]: E1124 00:34:21.586881 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa73b172a1bfea3f15b2e0da226c9637a59ed5fda8f165dc60712ba0f6ea1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gltt9" Nov 24 00:34:21.589046 kubelet[3328]: E1124 00:34:21.586935 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa73b172a1bfea3f15b2e0da226c9637a59ed5fda8f165dc60712ba0f6ea1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gltt9" Nov 24 00:34:21.589046 kubelet[3328]: E1124 00:34:21.587114 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gltt9_kube-system(db56e410-2d47-4d1e-affe-99108edb5a98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gltt9_kube-system(db56e410-2d47-4d1e-affe-99108edb5a98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aa73b172a1bfea3f15b2e0da226c9637a59ed5fda8f165dc60712ba0f6ea1cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gltt9" podUID="db56e410-2d47-4d1e-affe-99108edb5a98" Nov 24 00:34:21.589046 kubelet[3328]: E1124 00:34:21.587165 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e808110a4a7044df4ffa19654c2305f7e918ce19737f7d33279834d89d8584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.589202 kubelet[3328]: E1124 00:34:21.587215 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e808110a4a7044df4ffa19654c2305f7e918ce19737f7d33279834d89d8584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-744x5" Nov 24 00:34:21.589202 kubelet[3328]: E1124 00:34:21.587366 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e808110a4a7044df4ffa19654c2305f7e918ce19737f7d33279834d89d8584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-744x5" Nov 24 00:34:21.589202 kubelet[3328]: E1124 00:34:21.586186 3328 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e6b521c6e01103f85fb0b9528a67976fd17600cb80fedf40c4cd10ba27923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t4847" Nov 24 00:34:21.589202 kubelet[3328]: E1124 00:34:21.587435 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e6b521c6e01103f85fb0b9528a67976fd17600cb80fedf40c4cd10ba27923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t4847" Nov 24 00:34:21.589375 kubelet[3328]: E1124 00:34:21.587512 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t4847_kube-system(8a4c302a-8212-492a-9e7d-eb959684ea88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t4847_kube-system(8a4c302a-8212-492a-9e7d-eb959684ea88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc8e6b521c6e01103f85fb0b9528a67976fd17600cb80fedf40c4cd10ba27923\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t4847" podUID="8a4c302a-8212-492a-9e7d-eb959684ea88" Nov 24 00:34:21.589375 kubelet[3328]: E1124 00:34:21.587326 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"715a94599c0a7d69bfeea44ab458532ca2719e9ceb62da2b0f58c8efda5811ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.589375 kubelet[3328]: E1124 00:34:21.587992 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-744x5_calico-system(b03eefe9-3009-42ea-814c-37b36b40aa2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-744x5_calico-system(b03eefe9-3009-42ea-814c-37b36b40aa2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48e808110a4a7044df4ffa19654c2305f7e918ce19737f7d33279834d89d8584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:34:21.591281 kubelet[3328]: E1124 00:34:21.587553 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"715a94599c0a7d69bfeea44ab458532ca2719e9ceb62da2b0f58c8efda5811ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" Nov 24 00:34:21.591281 kubelet[3328]: E1124 00:34:21.588148 3328 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"715a94599c0a7d69bfeea44ab458532ca2719e9ceb62da2b0f58c8efda5811ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" Nov 24 00:34:21.591281 kubelet[3328]: E1124 00:34:21.588299 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6594b85c5f-5gwgt_calico-apiserver(e9762179-b934-433c-b377-36b45fd610b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6594b85c5f-5gwgt_calico-apiserver(e9762179-b934-433c-b377-36b45fd610b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"715a94599c0a7d69bfeea44ab458532ca2719e9ceb62da2b0f58c8efda5811ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:34:21.592284 containerd[1991]: time="2025-11-24T00:34:21.592223230Z" level=error msg="Failed to destroy network for sandbox \"5c01d6528f46de888f0c2899fe7dca17a3ec4e5d0407f0a579778cab29f130f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.593960 containerd[1991]: time="2025-11-24T00:34:21.593905026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-dnbvv,Uid:3787437e-985f-4539-b2f2-cf4084ce8482,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c01d6528f46de888f0c2899fe7dca17a3ec4e5d0407f0a579778cab29f130f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.594258 kubelet[3328]: E1124 00:34:21.594218 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c01d6528f46de888f0c2899fe7dca17a3ec4e5d0407f0a579778cab29f130f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.594344 kubelet[3328]: E1124 00:34:21.594288 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c01d6528f46de888f0c2899fe7dca17a3ec4e5d0407f0a579778cab29f130f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" Nov 24 00:34:21.594344 kubelet[3328]: E1124 00:34:21.594325 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c01d6528f46de888f0c2899fe7dca17a3ec4e5d0407f0a579778cab29f130f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" Nov 24 00:34:21.594437 kubelet[3328]: E1124 00:34:21.594380 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6594b85c5f-dnbvv_calico-apiserver(3787437e-985f-4539-b2f2-cf4084ce8482)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6594b85c5f-dnbvv_calico-apiserver(3787437e-985f-4539-b2f2-cf4084ce8482)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c01d6528f46de888f0c2899fe7dca17a3ec4e5d0407f0a579778cab29f130f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:34:21.699733 systemd[1]: Created slice kubepods-besteffort-pod96ab7330_0514_4b4d_8ac0_0b3305cdbb91.slice - libcontainer container kubepods-besteffort-pod96ab7330_0514_4b4d_8ac0_0b3305cdbb91.slice. Nov 24 00:34:21.703697 containerd[1991]: time="2025-11-24T00:34:21.703652584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48vsh,Uid:96ab7330-0514-4b4d-8ac0-0b3305cdbb91,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:21.776052 containerd[1991]: time="2025-11-24T00:34:21.776003979Z" level=error msg="Failed to destroy network for sandbox \"3c88ac7a1f98b20895ee172025ce2e9347d5cc99a6fa4c12554d984dd082edc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.779631 containerd[1991]: time="2025-11-24T00:34:21.779523162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48vsh,Uid:96ab7330-0514-4b4d-8ac0-0b3305cdbb91,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c88ac7a1f98b20895ee172025ce2e9347d5cc99a6fa4c12554d984dd082edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.780335 kubelet[3328]: E1124 00:34:21.780291 3328 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c88ac7a1f98b20895ee172025ce2e9347d5cc99a6fa4c12554d984dd082edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:34:21.781195 kubelet[3328]: E1124 00:34:21.780541 3328 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c88ac7a1f98b20895ee172025ce2e9347d5cc99a6fa4c12554d984dd082edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:21.781195 kubelet[3328]: E1124 00:34:21.780577 3328 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3c88ac7a1f98b20895ee172025ce2e9347d5cc99a6fa4c12554d984dd082edc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-48vsh" Nov 24 00:34:21.781195 kubelet[3328]: E1124 00:34:21.780635 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c88ac7a1f98b20895ee172025ce2e9347d5cc99a6fa4c12554d984dd082edc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:21.781988 systemd[1]: run-netns-cni\x2d604374e3\x2d469b\x2d9935\x2d3de8\x2d4c14188f3029.mount: Deactivated successfully. Nov 24 00:34:25.630259 kubelet[3328]: I1124 00:34:25.630204 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:34:27.302313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806300366.mount: Deactivated successfully. Nov 24 00:34:27.408429 containerd[1991]: time="2025-11-24T00:34:27.396738865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:27.408981 containerd[1991]: time="2025-11-24T00:34:27.399729894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:34:27.430694 containerd[1991]: time="2025-11-24T00:34:27.429400793Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:27.430694 containerd[1991]: time="2025-11-24T00:34:27.430114667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.469478341s" Nov 24 00:34:27.430694 containerd[1991]: time="2025-11-24T00:34:27.430588664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:34:27.431005 containerd[1991]: time="2025-11-24T00:34:27.430921864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:34:27.474409 containerd[1991]: time="2025-11-24T00:34:27.474354147Z" level=info msg="CreateContainer within sandbox \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:34:27.546663 containerd[1991]: time="2025-11-24T00:34:27.546618099Z" level=info msg="Container 0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf: CDI 
devices from CRI Config.CDIDevices: []" Nov 24 00:34:27.547576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934267674.mount: Deactivated successfully. Nov 24 00:34:27.605694 containerd[1991]: time="2025-11-24T00:34:27.605557116Z" level=info msg="CreateContainer within sandbox \"a523f5a7b7286aafa6fb36f37621976918ec595a927f410771989acd69357355\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf\"" Nov 24 00:34:27.608501 containerd[1991]: time="2025-11-24T00:34:27.608462340Z" level=info msg="StartContainer for \"0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf\"" Nov 24 00:34:27.620246 containerd[1991]: time="2025-11-24T00:34:27.620191205Z" level=info msg="connecting to shim 0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf" address="unix:///run/containerd/s/30b89262a9421dce61e00876263069dae6f40c3f1c0f621af2737cb6b20b10b5" protocol=ttrpc version=3 Nov 24 00:34:27.675823 systemd[1]: Started cri-containerd-0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf.scope - libcontainer container 0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf. Nov 24 00:34:27.784269 containerd[1991]: time="2025-11-24T00:34:27.784224860Z" level=info msg="StartContainer for \"0d9d14d6b577cdcd069bd57278d942d82645c0c824655c60e1f652eebd7d5bcf\" returns successfully" Nov 24 00:34:28.039659 kubelet[3328]: I1124 00:34:28.039602 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7gd9l" podStartSLOduration=1.246089937 podStartE2EDuration="17.039577489s" podCreationTimestamp="2025-11-24 00:34:11 +0000 UTC" firstStartedPulling="2025-11-24 00:34:11.63868677 +0000 UTC m=+25.158284399" lastFinishedPulling="2025-11-24 00:34:27.432174328 +0000 UTC m=+40.951771951" observedRunningTime="2025-11-24 00:34:28.039513123 +0000 UTC m=+41.559110769" watchObservedRunningTime="2025-11-24 00:34:28.039577489 +0000 UTC m=+41.559175131" Nov 24 00:34:28.173345 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 00:34:28.173626 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
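Every sandbox failure logged above reports the same underlying check: the Calico CNI plugin stats /var/lib/calico/nodename and refuses to add a pod network until calico-node has started and written that file, which is what finally happens once the calico-node container above reports "StartContainer ... returns successfully". The snippet below is only an illustrative sketch of that gate; the file path comes from the error messages themselves, while the function name and surrounding scaffolding are invented for illustration and are not Calico's actual implementation.

```python
from pathlib import Path

# Path taken from the error messages above; everything else here is illustrative.
NODENAME_FILE = Path("/var/lib/calico/nodename")

def calico_ready() -> bool:
    """Mirror the CNI plugin's gate: a pod network can only be added once
    calico-node has started and written its node name to this file."""
    try:
        return bool(NODENAME_FILE.read_text().strip())
    except FileNotFoundError:
        # Corresponds to: "stat /var/lib/calico/nodename: no such file or directory:
        # check that the calico/node container is running and has mounted /var/lib/calico/"
        return False

if __name__ == "__main__":
    if calico_ready():
        print("CNI add would proceed")
    else:
        print("CNI add would fail: calico/node not running or /var/lib/calico/ not mounted")
```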
Nov 24 00:34:28.675273 kubelet[3328]: I1124 00:34:28.675233 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clmzg\" (UniqueName: \"kubernetes.io/projected/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-kube-api-access-clmzg\") pod \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\" (UID: \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\") " Nov 24 00:34:28.675720 kubelet[3328]: I1124 00:34:28.675301 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-ca-bundle\") pod \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\" (UID: \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\") " Nov 24 00:34:28.675720 kubelet[3328]: I1124 00:34:28.675325 3328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-backend-key-pair\") pod \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\" (UID: \"620a2f0b-085b-4117-b2fa-5478c2d5ea1b\") " Nov 24 00:34:28.689008 kubelet[3328]: I1124 00:34:28.688890 3328 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-kube-api-access-clmzg" (OuterVolumeSpecName: "kube-api-access-clmzg") pod "620a2f0b-085b-4117-b2fa-5478c2d5ea1b" (UID: "620a2f0b-085b-4117-b2fa-5478c2d5ea1b"). InnerVolumeSpecName "kube-api-access-clmzg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:34:28.689661 kubelet[3328]: I1124 00:34:28.689621 3328 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "620a2f0b-085b-4117-b2fa-5478c2d5ea1b" (UID: "620a2f0b-085b-4117-b2fa-5478c2d5ea1b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:34:28.690637 kubelet[3328]: I1124 00:34:28.690577 3328 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "620a2f0b-085b-4117-b2fa-5478c2d5ea1b" (UID: "620a2f0b-085b-4117-b2fa-5478c2d5ea1b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:34:28.692294 systemd[1]: var-lib-kubelet-pods-620a2f0b\x2d085b\x2d4117\x2db2fa\x2d5478c2d5ea1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dclmzg.mount: Deactivated successfully. Nov 24 00:34:28.693338 systemd[1]: var-lib-kubelet-pods-620a2f0b\x2d085b\x2d4117\x2db2fa\x2d5478c2d5ea1b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 24 00:34:28.776866 kubelet[3328]: I1124 00:34:28.776773 3328 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clmzg\" (UniqueName: \"kubernetes.io/projected/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-kube-api-access-clmzg\") on node \"ip-172-31-20-18\" DevicePath \"\"" Nov 24 00:34:28.776866 kubelet[3328]: I1124 00:34:28.776817 3328 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-ca-bundle\") on node \"ip-172-31-20-18\" DevicePath \"\"" Nov 24 00:34:28.776866 kubelet[3328]: I1124 00:34:28.776831 3328 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/620a2f0b-085b-4117-b2fa-5478c2d5ea1b-whisker-backend-key-pair\") on node \"ip-172-31-20-18\" DevicePath \"\"" Nov 24 00:34:29.007112 kubelet[3328]: I1124 00:34:29.007067 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:34:29.016886 systemd[1]: Removed slice kubepods-besteffort-pod620a2f0b_085b_4117_b2fa_5478c2d5ea1b.slice - libcontainer container kubepods-besteffort-pod620a2f0b_085b_4117_b2fa_5478c2d5ea1b.slice. Nov 24 00:34:29.174763 systemd[1]: Created slice kubepods-besteffort-pod6d71726d_c6c6_4b4a_9ff1_13f1cf35cfef.slice - libcontainer container kubepods-besteffort-pod6d71726d_c6c6_4b4a_9ff1_13f1cf35cfef.slice. Nov 24 00:34:29.281328 kubelet[3328]: I1124 00:34:29.281001 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef-whisker-ca-bundle\") pod \"whisker-84bf774ffb-kpf6d\" (UID: \"6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef\") " pod="calico-system/whisker-84bf774ffb-kpf6d" Nov 24 00:34:29.281328 kubelet[3328]: I1124 00:34:29.281053 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6n9v\" (UniqueName: \"kubernetes.io/projected/6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef-kube-api-access-f6n9v\") pod \"whisker-84bf774ffb-kpf6d\" (UID: \"6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef\") " pod="calico-system/whisker-84bf774ffb-kpf6d" Nov 24 00:34:29.281328 kubelet[3328]: I1124 00:34:29.281105 3328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef-whisker-backend-key-pair\") pod \"whisker-84bf774ffb-kpf6d\" (UID: \"6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef\") " pod="calico-system/whisker-84bf774ffb-kpf6d" Nov 24 00:34:29.479694 containerd[1991]: time="2025-11-24T00:34:29.479633460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84bf774ffb-kpf6d,Uid:6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:30.139773 (udev-worker)[4599]: Network interface NamePolicy= disabled on kernel command line. 
Nov 24 00:34:30.149131 systemd-networkd[1850]: cali0554ea1db1e: Link UP Nov 24 00:34:30.149836 systemd-networkd[1850]: cali0554ea1db1e: Gained carrier Nov 24 00:34:30.183351 containerd[1991]: 2025-11-24 00:34:29.513 [INFO][4633] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:34:30.183351 containerd[1991]: 2025-11-24 00:34:29.573 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0 whisker-84bf774ffb- calico-system 6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef 878 0 2025-11-24 00:34:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84bf774ffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-18 whisker-84bf774ffb-kpf6d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0554ea1db1e [] [] }} ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-" Nov 24 00:34:30.183351 containerd[1991]: 2025-11-24 00:34:29.573 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.183351 containerd[1991]: 2025-11-24 00:34:30.000 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" HandleID="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Workload="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.007 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" HandleID="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Workload="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376740), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-18", "pod":"whisker-84bf774ffb-kpf6d", "timestamp":"2025-11-24 00:34:30.000879462 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.007 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.015 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.022 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.056 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" host="ip-172-31-20-18" Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.073 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.081 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.086 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:30.183944 containerd[1991]: 2025-11-24 00:34:30.091 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.091 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" host="ip-172-31-20-18" Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.095 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.105 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" host="ip-172-31-20-18" Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.115 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.193/26] block=192.168.54.192/26 handle="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" host="ip-172-31-20-18" Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.115 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.193/26] handle="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" host="ip-172-31-20-18" Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.115 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:34:30.184312 containerd[1991]: 2025-11-24 00:34:30.115 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.193/26] IPv6=[] ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" HandleID="k8s-pod-network.1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Workload="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.189100 containerd[1991]: 2025-11-24 00:34:30.120 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0", GenerateName:"whisker-84bf774ffb-", Namespace:"calico-system", SelfLink:"", UID:"6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84bf774ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"whisker-84bf774ffb-kpf6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0554ea1db1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:30.189100 containerd[1991]: 2025-11-24 00:34:30.121 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.193/32] ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.189252 containerd[1991]: 2025-11-24 00:34:30.121 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0554ea1db1e ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.189252 containerd[1991]: 2025-11-24 00:34:30.154 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.189335 containerd[1991]: 2025-11-24 00:34:30.154 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" 
WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0", GenerateName:"whisker-84bf774ffb-", Namespace:"calico-system", SelfLink:"", UID:"6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84bf774ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f", Pod:"whisker-84bf774ffb-kpf6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0554ea1db1e", MAC:"7e:c6:01:48:1a:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:30.189428 containerd[1991]: 2025-11-24 00:34:30.174 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" Namespace="calico-system" Pod="whisker-84bf774ffb-kpf6d" WorkloadEndpoint="ip--172--31--20--18-k8s-whisker--84bf774ffb--kpf6d-eth0" Nov 24 00:34:30.567162 containerd[1991]: time="2025-11-24T00:34:30.567034538Z" level=info msg="connecting to shim 1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f" address="unix:///run/containerd/s/a7aa0226670b1f8c2c4549e45b53cc4098e1c3fb6fc002786c51316582269da1" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:30.672004 systemd[1]: Started cri-containerd-1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f.scope - libcontainer container 1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f. 
Nov 24 00:34:30.680259 kubelet[3328]: I1124 00:34:30.679147 3328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620a2f0b-085b-4117-b2fa-5478c2d5ea1b" path="/var/lib/kubelet/pods/620a2f0b-085b-4117-b2fa-5478c2d5ea1b/volumes" Nov 24 00:34:30.838700 kubelet[3328]: I1124 00:34:30.837782 3328 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:34:30.883849 containerd[1991]: time="2025-11-24T00:34:30.883799746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84bf774ffb-kpf6d,Uid:6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d8c3b72a468af21733d26781d3da8baf112364e06bfabcc60c748fb9119678f\"" Nov 24 00:34:30.906413 containerd[1991]: time="2025-11-24T00:34:30.905949028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:34:31.238640 containerd[1991]: time="2025-11-24T00:34:31.238590381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:31.244194 containerd[1991]: time="2025-11-24T00:34:31.241145704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:34:31.244582 containerd[1991]: time="2025-11-24T00:34:31.241507674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:34:31.245347 kubelet[3328]: E1124 00:34:31.245175 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:34:31.245347 kubelet[3328]: E1124 00:34:31.245229 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:34:31.266935 kubelet[3328]: E1124 00:34:31.266709 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b38025872b7d4fa89ea0f1fb92dc0334,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:31.271229 containerd[1991]: time="2025-11-24T00:34:31.271171090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:34:31.363859 systemd-networkd[1850]: vxlan.calico: Link UP Nov 24 00:34:31.363874 systemd-networkd[1850]: vxlan.calico: Gained carrier Nov 24 00:34:31.421657 (udev-worker)[4603]: Network interface NamePolicy= disabled on kernel command line. 
Nov 24 00:34:31.553979 containerd[1991]: time="2025-11-24T00:34:31.553370266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:31.556724 containerd[1991]: time="2025-11-24T00:34:31.556672013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:34:31.556724 containerd[1991]: time="2025-11-24T00:34:31.556647044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:34:31.557218 kubelet[3328]: E1124 00:34:31.557169 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:34:31.557472 kubelet[3328]: E1124 00:34:31.557361 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:34:31.557812 kubelet[3328]: E1124 00:34:31.557695 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:31.559685 kubelet[3328]: E1124 00:34:31.559612 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:34:32.031190 kubelet[3328]: E1124 00:34:32.031039 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:34:32.086161 systemd-networkd[1850]: cali0554ea1db1e: Gained IPv6LL Nov 24 00:34:32.405929 systemd-networkd[1850]: vxlan.calico: Gained IPv6LL Nov 24 00:34:34.673437 containerd[1991]: time="2025-11-24T00:34:34.673378242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4847,Uid:8a4c302a-8212-492a-9e7d-eb959684ea88,Namespace:kube-system,Attempt:0,}" Nov 24 00:34:34.674942 containerd[1991]: time="2025-11-24T00:34:34.673573756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-744x5,Uid:b03eefe9-3009-42ea-814c-37b36b40aa2b,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:34.674942 containerd[1991]: time="2025-11-24T00:34:34.674308553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gltt9,Uid:db56e410-2d47-4d1e-affe-99108edb5a98,Namespace:kube-system,Attempt:0,}" Nov 24 00:34:34.808539 ntpd[2180]: Listen normally on 6 vxlan.calico 192.168.54.192:123 Nov 24 00:34:34.810401 ntpd[2180]: 24 Nov 00:34:34 ntpd[2180]: Listen normally on 6 vxlan.calico 192.168.54.192:123 Nov 24 00:34:34.810401 ntpd[2180]: 24 Nov 00:34:34 ntpd[2180]: Listen normally on 7 cali0554ea1db1e [fe80::ecee:eeff:feee:eeee%4]:123 Nov 24 00:34:34.810401 
ntpd[2180]: 24 Nov 00:34:34 ntpd[2180]: Listen normally on 8 vxlan.calico [fe80::6405:85ff:febf:e413%5]:123 Nov 24 00:34:34.808605 ntpd[2180]: Listen normally on 7 cali0554ea1db1e [fe80::ecee:eeff:feee:eeee%4]:123 Nov 24 00:34:34.808636 ntpd[2180]: Listen normally on 8 vxlan.calico [fe80::6405:85ff:febf:e413%5]:123 Nov 24 00:34:34.951260 systemd-networkd[1850]: calic4ad77311c4: Link UP Nov 24 00:34:34.954900 systemd-networkd[1850]: calic4ad77311c4: Gained carrier Nov 24 00:34:34.956356 (udev-worker)[5002]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:34:35.001188 containerd[1991]: 2025-11-24 00:34:34.812 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0 goldmane-666569f655- calico-system b03eefe9-3009-42ea-814c-37b36b40aa2b 803 0 2025-11-24 00:34:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-18 goldmane-666569f655-744x5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic4ad77311c4 [] [] }} ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-" Nov 24 00:34:35.001188 containerd[1991]: 2025-11-24 00:34:34.813 [INFO][4944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.001188 containerd[1991]: 2025-11-24 00:34:34.873 [INFO][4982] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" HandleID="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Workload="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.873 [INFO][4982] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" HandleID="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Workload="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-18", "pod":"goldmane-666569f655-744x5", "timestamp":"2025-11-24 00:34:34.873606742 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.874 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.874 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.874 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.885 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" host="ip-172-31-20-18" Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.897 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.911 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.914 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.002546 containerd[1991]: 2025-11-24 00:34:34.917 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.917 [INFO][4982] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" host="ip-172-31-20-18" Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.920 [INFO][4982] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.926 [INFO][4982] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" host="ip-172-31-20-18" Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.936 [INFO][4982] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.194/26] block=192.168.54.192/26 handle="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" host="ip-172-31-20-18" Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.936 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.194/26] handle="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" host="ip-172-31-20-18" Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.936 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
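The ipam/ipam.go lines above trace one complete Calico IPAM assignment for the goldmane pod: acquire the host-wide IPAM lock, look up the host's block affinities, try the affine block 192.168.54.192/26, load it, claim one free address (192.168.54.194), write the block back to the datastore, and release the lock. The sketch below mirrors only that ordering; the types and function names are stand-ins, not the real libcalico-go IPAM API, and the pre-claimed addresses are taken from earlier entries in this log (192.168.54.192 is the node's vxlan.calico address shown in the ntpd lines).

    // ipamflow.go - illustrative sketch of the assignment order traced above:
    // lock -> affine block -> load -> claim -> write -> unlock. Stand-in types only.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // block stands in for a Calico allocation block such as 192.168.54.192/26.
    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // IP -> handle that claimed it
    }

    var hostLock sync.Mutex // the "host-wide IPAM lock" from the log

    // assignOne claims the first free address in the host's affine block.
    func assignOne(b *block, handle string) (net.IP, error) {
        hostLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."

        base := b.cidr.IP.Mask(b.cidr.Mask)
        for i := 0; i < 64; i++ { // a /26 holds 64 addresses
            candidate := make(net.IP, len(base))
            copy(candidate, base)
            candidate[len(candidate)-1] += byte(i)
            if _, used := b.allocated[candidate.String()]; !used {
                b.allocated[candidate.String()] = handle // "Writing block in order to claim IPs"
                return candidate, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.54.192/26") // the affine block in the log
        b := &block{cidr: cidr, allocated: map[string]string{
            "192.168.54.192": "vxlan.calico",     // node tunnel address (see the ntpd lines)
            "192.168.54.193": "earlier-endpoint", // already claimed before this trace
        }}
        ip, err := assignOne(b, "goldmane-666569f655-744x5") // illustrative handle label
        fmt.Println(ip, err)                                 // 192.168.54.194 <nil>, matching the log
    }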
Nov 24 00:34:35.003804 containerd[1991]: 2025-11-24 00:34:34.936 [INFO][4982] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.194/26] IPv6=[] ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" HandleID="k8s-pod-network.8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Workload="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.004017 containerd[1991]: 2025-11-24 00:34:34.945 [INFO][4944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b03eefe9-3009-42ea-814c-37b36b40aa2b", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"goldmane-666569f655-744x5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic4ad77311c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:35.004017 containerd[1991]: 2025-11-24 00:34:34.945 [INFO][4944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.194/32] ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.004647 containerd[1991]: 2025-11-24 00:34:34.945 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4ad77311c4 ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.004647 containerd[1991]: 2025-11-24 00:34:34.956 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.005776 containerd[1991]: 2025-11-24 00:34:34.957 [INFO][4944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" 
WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b03eefe9-3009-42ea-814c-37b36b40aa2b", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf", Pod:"goldmane-666569f655-744x5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic4ad77311c4", MAC:"8e:01:ee:24:4a:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:35.005937 containerd[1991]: 2025-11-24 00:34:34.974 [INFO][4944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" Namespace="calico-system" Pod="goldmane-666569f655-744x5" WorkloadEndpoint="ip--172--31--20--18-k8s-goldmane--666569f655--744x5-eth0" Nov 24 00:34:35.087947 containerd[1991]: time="2025-11-24T00:34:35.087589408Z" level=info msg="connecting to shim 8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf" address="unix:///run/containerd/s/a528e69422b4fd4af90b2613e6570ea2e520069e7f034d1052c729f2e7b03a83" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:35.142087 systemd-networkd[1850]: cali30f52c3eecb: Link UP Nov 24 00:34:35.149016 systemd-networkd[1850]: cali30f52c3eecb: Gained carrier Nov 24 00:34:35.208637 containerd[1991]: 2025-11-24 00:34:34.805 [INFO][4946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0 coredns-668d6bf9bc- kube-system 8a4c302a-8212-492a-9e7d-eb959684ea88 801 0 2025-11-24 00:33:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-18 coredns-668d6bf9bc-t4847 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali30f52c3eecb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-" Nov 24 00:34:35.208637 containerd[1991]: 2025-11-24 00:34:34.805 [INFO][4946] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.208637 containerd[1991]: 2025-11-24 00:34:34.904 [INFO][4984] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" HandleID="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Workload="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:34.905 [INFO][4984] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" HandleID="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Workload="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4830), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-18", "pod":"coredns-668d6bf9bc-t4847", "timestamp":"2025-11-24 00:34:34.904817903 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:34.906 [INFO][4984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:34.936 [INFO][4984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:34.937 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:34.999 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" host="ip-172-31-20-18" Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:35.016 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:35.026 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:35.032 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:35.044 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.208905 containerd[1991]: 2025-11-24 00:34:35.044 [INFO][4984] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" host="ip-172-31-20-18" Nov 24 00:34:35.209293 containerd[1991]: 2025-11-24 00:34:35.061 [INFO][4984] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec Nov 24 00:34:35.209293 containerd[1991]: 2025-11-24 00:34:35.086 [INFO][4984] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" host="ip-172-31-20-18" Nov 24 00:34:35.209293 containerd[1991]: 2025-11-24 00:34:35.102 [INFO][4984] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.54.195/26] block=192.168.54.192/26 handle="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" host="ip-172-31-20-18" Nov 24 00:34:35.209293 containerd[1991]: 2025-11-24 00:34:35.103 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.195/26] handle="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" host="ip-172-31-20-18" Nov 24 00:34:35.209293 containerd[1991]: 2025-11-24 00:34:35.103 [INFO][4984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:34:35.209293 containerd[1991]: 2025-11-24 00:34:35.103 [INFO][4984] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.195/26] IPv6=[] ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" HandleID="k8s-pod-network.420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Workload="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.210775 containerd[1991]: 2025-11-24 00:34:35.118 [INFO][4946] cni-plugin/k8s.go 418: Populated endpoint ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a4c302a-8212-492a-9e7d-eb959684ea88", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"coredns-668d6bf9bc-t4847", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30f52c3eecb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:35.210775 containerd[1991]: 2025-11-24 00:34:35.119 [INFO][4946] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.195/32] ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.210775 containerd[1991]: 2025-11-24 00:34:35.119 [INFO][4946] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to cali30f52c3eecb ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.210775 containerd[1991]: 2025-11-24 00:34:35.158 [INFO][4946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.210775 containerd[1991]: 2025-11-24 00:34:35.161 [INFO][4946] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8a4c302a-8212-492a-9e7d-eb959684ea88", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec", Pod:"coredns-668d6bf9bc-t4847", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30f52c3eecb", MAC:"5a:72:1f:d2:12:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:35.210775 containerd[1991]: 2025-11-24 00:34:35.196 [INFO][4946] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-t4847" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--t4847-eth0" Nov 24 00:34:35.237367 systemd[1]: Started cri-containerd-8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf.scope - libcontainer container 8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf. 
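In the WorkloadEndpoint dumps above the numeric port fields are rendered in hex: Port:0x35 is decimal 53 (the coredns dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns Prometheus metrics port), matching the decimal values listed earlier in the same entries ({dns UDP 53 0} ... {metrics TCP 9153 0}). A trivial check:

    // ports.go - decodes the hex port values printed in the endpoint dumps above.
    package main

    import "fmt"

    func main() {
        fmt.Println(0x35, 0x23c1) // prints "53 9153": coredns DNS and metrics ports
    }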
Nov 24 00:34:35.288167 systemd-networkd[1850]: cali01084730b5a: Link UP Nov 24 00:34:35.288991 systemd-networkd[1850]: cali01084730b5a: Gained carrier Nov 24 00:34:35.308798 containerd[1991]: time="2025-11-24T00:34:35.308750597Z" level=info msg="connecting to shim 420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec" address="unix:///run/containerd/s/619df6c5b2cc4d09b12b9ee99d03c53ffe9aeac30b3ea2dff1ccab2c7ffc8dbf" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:34.816 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0 coredns-668d6bf9bc- kube-system db56e410-2d47-4d1e-affe-99108edb5a98 805 0 2025-11-24 00:33:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-18 coredns-668d6bf9bc-gltt9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01084730b5a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:34.817 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:34.909 [INFO][4986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" HandleID="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Workload="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:34.911 [INFO][4986] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" HandleID="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Workload="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-18", "pod":"coredns-668d6bf9bc-gltt9", "timestamp":"2025-11-24 00:34:34.909311337 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:34.911 [INFO][4986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.103 [INFO][4986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.103 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.171 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.186 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.212 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.222 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.237 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.238 [INFO][4986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.243 [INFO][4986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.254 [INFO][4986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.267 [INFO][4986] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.196/26] block=192.168.54.192/26 handle="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.267 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.196/26] handle="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" host="ip-172-31-20-18" Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.267 [INFO][4986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:34:35.353812 containerd[1991]: 2025-11-24 00:34:35.268 [INFO][4986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.196/26] IPv6=[] ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" HandleID="k8s-pod-network.76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Workload="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.356289 containerd[1991]: 2025-11-24 00:34:35.276 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"db56e410-2d47-4d1e-affe-99108edb5a98", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"coredns-668d6bf9bc-gltt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01084730b5a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:35.356289 containerd[1991]: 2025-11-24 00:34:35.276 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.196/32] ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.356289 containerd[1991]: 2025-11-24 00:34:35.276 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01084730b5a ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.356289 containerd[1991]: 2025-11-24 00:34:35.293 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" 
WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.356289 containerd[1991]: 2025-11-24 00:34:35.296 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"db56e410-2d47-4d1e-affe-99108edb5a98", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 33, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d", Pod:"coredns-668d6bf9bc-gltt9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01084730b5a", MAC:"0a:76:8e:b1:e2:16", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:35.356289 containerd[1991]: 2025-11-24 00:34:35.346 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-gltt9" WorkloadEndpoint="ip--172--31--20--18-k8s-coredns--668d6bf9bc--gltt9-eth0" Nov 24 00:34:35.374110 systemd[1]: Started cri-containerd-420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec.scope - libcontainer container 420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec. Nov 24 00:34:35.404322 containerd[1991]: time="2025-11-24T00:34:35.404263125Z" level=info msg="connecting to shim 76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d" address="unix:///run/containerd/s/5dcdfa301a3e64473794467c86e436ab3f4b340f7974ebd3f63e26bde2c3dece" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:35.457702 systemd[1]: Started cri-containerd-76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d.scope - libcontainer container 76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d. 
Nov 24 00:34:35.576303 containerd[1991]: time="2025-11-24T00:34:35.576112585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t4847,Uid:8a4c302a-8212-492a-9e7d-eb959684ea88,Namespace:kube-system,Attempt:0,} returns sandbox id \"420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec\"" Nov 24 00:34:35.601777 containerd[1991]: time="2025-11-24T00:34:35.601585671Z" level=info msg="CreateContainer within sandbox \"420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:34:35.632798 containerd[1991]: time="2025-11-24T00:34:35.632748737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gltt9,Uid:db56e410-2d47-4d1e-affe-99108edb5a98,Namespace:kube-system,Attempt:0,} returns sandbox id \"76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d\"" Nov 24 00:34:35.637467 containerd[1991]: time="2025-11-24T00:34:35.636880747Z" level=info msg="Container fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:34:35.639989 containerd[1991]: time="2025-11-24T00:34:35.639910548Z" level=info msg="CreateContainer within sandbox \"76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:34:35.658513 containerd[1991]: time="2025-11-24T00:34:35.658434961Z" level=info msg="CreateContainer within sandbox \"420bde5445fdd7500cc95f819fbc17abad3f1309bbb6448a9d47c554628944ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05\"" Nov 24 00:34:35.660070 containerd[1991]: time="2025-11-24T00:34:35.660021054Z" level=info msg="StartContainer for \"fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05\"" Nov 24 00:34:35.662950 containerd[1991]: time="2025-11-24T00:34:35.662910247Z" level=info msg="connecting to shim fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05" address="unix:///run/containerd/s/619df6c5b2cc4d09b12b9ee99d03c53ffe9aeac30b3ea2dff1ccab2c7ffc8dbf" protocol=ttrpc version=3 Nov 24 00:34:35.669105 containerd[1991]: time="2025-11-24T00:34:35.669050634Z" level=info msg="Container a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:34:35.673055 containerd[1991]: time="2025-11-24T00:34:35.673012789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-dnbvv,Uid:3787437e-985f-4539-b2f2-cf4084ce8482,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:34:35.676042 containerd[1991]: time="2025-11-24T00:34:35.675803812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b9c8d87-7ndft,Uid:6d264742-bc12-4821-8aea-351233494ad9,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:35.676042 containerd[1991]: time="2025-11-24T00:34:35.675845377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-5gwgt,Uid:e9762179-b934-433c-b377-36b45fd610b8,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:34:35.751094 containerd[1991]: time="2025-11-24T00:34:35.749503125Z" level=info msg="CreateContainer within sandbox \"76b4c48e7d89072120c07d8bf087b5f1c3480061378e99318ce3aa425f48bc7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f\"" Nov 24 00:34:35.756574 systemd[1]: Started 
cri-containerd-fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05.scope - libcontainer container fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05. Nov 24 00:34:35.762573 containerd[1991]: time="2025-11-24T00:34:35.761323011Z" level=info msg="StartContainer for \"a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f\"" Nov 24 00:34:35.797040 containerd[1991]: time="2025-11-24T00:34:35.796989944Z" level=info msg="connecting to shim a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f" address="unix:///run/containerd/s/5dcdfa301a3e64473794467c86e436ab3f4b340f7974ebd3f63e26bde2c3dece" protocol=ttrpc version=3 Nov 24 00:34:35.925511 systemd[1]: Started cri-containerd-a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f.scope - libcontainer container a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f. Nov 24 00:34:35.955626 containerd[1991]: time="2025-11-24T00:34:35.955501955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-744x5,Uid:b03eefe9-3009-42ea-814c-37b36b40aa2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ba4e639a99c704d5d68a70c95b043787ca15b3215cfb51e4d643406d3ee1caf\"" Nov 24 00:34:35.962961 containerd[1991]: time="2025-11-24T00:34:35.962854354Z" level=info msg="StartContainer for \"fc1a8267eb8a5f69368e7b017c7bdac705a47b32e3233a717cb35af7e3a4af05\" returns successfully" Nov 24 00:34:35.973551 containerd[1991]: time="2025-11-24T00:34:35.973133876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:34:36.153491 containerd[1991]: time="2025-11-24T00:34:36.153353359Z" level=info msg="StartContainer for \"a6b341f5876d263c2005274a8e378a92abdcc6226ed1be0a4132d89c48e3825f\" returns successfully" Nov 24 00:34:36.371722 containerd[1991]: time="2025-11-24T00:34:36.371353240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:36.380145 containerd[1991]: time="2025-11-24T00:34:36.379720590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:34:36.380145 containerd[1991]: time="2025-11-24T00:34:36.379774541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:34:36.380344 kubelet[3328]: E1124 00:34:36.380151 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:34:36.380344 kubelet[3328]: E1124 00:34:36.380210 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:34:36.382677 kubelet[3328]: E1124 00:34:36.380789 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw5xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-744x5_calico-system(b03eefe9-3009-42ea-814c-37b36b40aa2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:36.382677 kubelet[3328]: E1124 00:34:36.381991 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:34:36.382498 systemd-networkd[1850]: 
cali08ad5ebf64e: Link UP Nov 24 00:34:36.385067 systemd-networkd[1850]: cali08ad5ebf64e: Gained carrier Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.038 [INFO][5189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0 calico-apiserver-6594b85c5f- calico-apiserver e9762179-b934-433c-b377-36b45fd610b8 804 0 2025-11-24 00:34:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6594b85c5f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-18 calico-apiserver-6594b85c5f-5gwgt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali08ad5ebf64e [] [] }} ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.038 [INFO][5189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.169 [INFO][5269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" HandleID="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Workload="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.170 [INFO][5269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" HandleID="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Workload="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001179c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-18", "pod":"calico-apiserver-6594b85c5f-5gwgt", "timestamp":"2025-11-24 00:34:36.169792442 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.170 [INFO][5269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.170 [INFO][5269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.170 [INFO][5269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.249 [INFO][5269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.293 [INFO][5269] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.304 [INFO][5269] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.316 [INFO][5269] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.324 [INFO][5269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.325 [INFO][5269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.328 [INFO][5269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484 Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.337 [INFO][5269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.349 [INFO][5269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.197/26] block=192.168.54.192/26 handle="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.350 [INFO][5269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.197/26] handle="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" host="ip-172-31-20-18" Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.350 [INFO][5269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:34:36.439088 containerd[1991]: 2025-11-24 00:34:36.350 [INFO][5269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.197/26] IPv6=[] ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" HandleID="k8s-pod-network.0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Workload="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.442163 containerd[1991]: 2025-11-24 00:34:36.358 [INFO][5189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0", GenerateName:"calico-apiserver-6594b85c5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9762179-b934-433c-b377-36b45fd610b8", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594b85c5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"calico-apiserver-6594b85c5f-5gwgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ad5ebf64e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:36.442163 containerd[1991]: 2025-11-24 00:34:36.360 [INFO][5189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.197/32] ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.442163 containerd[1991]: 2025-11-24 00:34:36.360 [INFO][5189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08ad5ebf64e ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.442163 containerd[1991]: 2025-11-24 00:34:36.388 [INFO][5189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.442163 containerd[1991]: 2025-11-24 00:34:36.391 [INFO][5189] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0", GenerateName:"calico-apiserver-6594b85c5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9762179-b934-433c-b377-36b45fd610b8", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594b85c5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484", Pod:"calico-apiserver-6594b85c5f-5gwgt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali08ad5ebf64e", MAC:"a6:c0:f1:76:ca:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:36.442163 containerd[1991]: 2025-11-24 00:34:36.427 [INFO][5189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-5gwgt" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--5gwgt-eth0" Nov 24 00:34:36.444575 kubelet[3328]: I1124 00:34:36.437233 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t4847" podStartSLOduration=44.427049231 podStartE2EDuration="44.427049231s" podCreationTimestamp="2025-11-24 00:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:34:36.193344329 +0000 UTC m=+49.712941976" watchObservedRunningTime="2025-11-24 00:34:36.427049231 +0000 UTC m=+49.946646877" Nov 24 00:34:36.498043 containerd[1991]: time="2025-11-24T00:34:36.497984510Z" level=info msg="connecting to shim 0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484" address="unix:///run/containerd/s/f0771b04bafe1f85f29b9ae556bae1b807702c62ef74eafd922d2e2830311574" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:36.554165 systemd[1]: Started cri-containerd-0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484.scope - libcontainer container 0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484. 
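The pod_startup_latency_tracker entry above reports podStartSLOduration=44.427049231 for coredns-668d6bf9bc-t4847. The pull timestamps are the zero time (no image pull was needed), and numerically the figure equals watchObservedRunningTime minus podCreationTimestamp. A quick arithmetic check using only the timestamps printed in that entry:

    // slo.go - recomputes the 44.427049231s startup figure from the two timestamps
    // in the kubelet entry above (the zero pull timestamps contribute nothing).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-11-24 00:33:52 +0000 UTC")
        running, _ := time.Parse(layout, "2025-11-24 00:34:36.427049231 +0000 UTC")
        fmt.Println(running.Sub(created)) // 44.427049231s
    }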
Nov 24 00:34:36.578849 systemd-networkd[1850]: cali1663f142c55: Link UP Nov 24 00:34:36.581307 systemd-networkd[1850]: cali1663f142c55: Gained carrier Nov 24 00:34:36.629703 systemd-networkd[1850]: cali01084730b5a: Gained IPv6LL Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.077 [INFO][5184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0 calico-kube-controllers-68b9c8d87- calico-system 6d264742-bc12-4821-8aea-351233494ad9 807 0 2025-11-24 00:34:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68b9c8d87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-18 calico-kube-controllers-68b9c8d87-7ndft eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1663f142c55 [] [] }} ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.077 [INFO][5184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.233 [INFO][5281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" HandleID="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Workload="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.233 [INFO][5281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" HandleID="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Workload="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000608260), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-18", "pod":"calico-kube-controllers-68b9c8d87-7ndft", "timestamp":"2025-11-24 00:34:36.233065632 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.233 [INFO][5281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.350 [INFO][5281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.351 [INFO][5281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.378 [INFO][5281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.418 [INFO][5281] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.452 [INFO][5281] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.457 [INFO][5281] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.462 [INFO][5281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.463 [INFO][5281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.467 [INFO][5281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28 Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.476 [INFO][5281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.533 [INFO][5281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.198/26] block=192.168.54.192/26 handle="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.533 [INFO][5281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.198/26] handle="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" host="ip-172-31-20-18" Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.534 [INFO][5281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:34:36.659631 containerd[1991]: 2025-11-24 00:34:36.535 [INFO][5281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.198/26] IPv6=[] ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" HandleID="k8s-pod-network.2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Workload="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.663087 containerd[1991]: 2025-11-24 00:34:36.550 [INFO][5184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0", GenerateName:"calico-kube-controllers-68b9c8d87-", Namespace:"calico-system", SelfLink:"", UID:"6d264742-bc12-4821-8aea-351233494ad9", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b9c8d87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"calico-kube-controllers-68b9c8d87-7ndft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1663f142c55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:36.663087 containerd[1991]: 2025-11-24 00:34:36.550 [INFO][5184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.198/32] ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.663087 containerd[1991]: 2025-11-24 00:34:36.550 [INFO][5184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1663f142c55 ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.663087 containerd[1991]: 2025-11-24 00:34:36.581 [INFO][5184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.663087 containerd[1991]: 2025-11-24 
00:34:36.583 [INFO][5184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0", GenerateName:"calico-kube-controllers-68b9c8d87-", Namespace:"calico-system", SelfLink:"", UID:"6d264742-bc12-4821-8aea-351233494ad9", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b9c8d87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28", Pod:"calico-kube-controllers-68b9c8d87-7ndft", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1663f142c55", MAC:"2e:84:e4:7e:ef:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:36.663087 containerd[1991]: 2025-11-24 00:34:36.651 [INFO][5184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" Namespace="calico-system" Pod="calico-kube-controllers-68b9c8d87-7ndft" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--kube--controllers--68b9c8d87--7ndft-eth0" Nov 24 00:34:36.682478 containerd[1991]: time="2025-11-24T00:34:36.681589371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48vsh,Uid:96ab7330-0514-4b4d-8ac0-0b3305cdbb91,Namespace:calico-system,Attempt:0,}" Nov 24 00:34:36.793216 systemd-networkd[1850]: cali45706cd2d60: Link UP Nov 24 00:34:36.794648 systemd-networkd[1850]: cali45706cd2d60: Gained carrier Nov 24 00:34:36.819929 containerd[1991]: time="2025-11-24T00:34:36.819833422Z" level=info msg="connecting to shim 2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28" address="unix:///run/containerd/s/c7aa264a73a31d744bb8ab3f36d78bf61d9c4481b986a9cd426d5a408f77a9aa" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.072 [INFO][5181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0 calico-apiserver-6594b85c5f- calico-apiserver 3787437e-985f-4539-b2f2-cf4084ce8482 806 0 2025-11-24 00:34:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6594b85c5f 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-18 calico-apiserver-6594b85c5f-dnbvv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45706cd2d60 [] [] }} ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.072 [INFO][5181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.263 [INFO][5279] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" HandleID="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Workload="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.268 [INFO][5279] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" HandleID="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Workload="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-18", "pod":"calico-apiserver-6594b85c5f-dnbvv", "timestamp":"2025-11-24 00:34:36.26341248 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.268 [INFO][5279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.535 [INFO][5279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.536 [INFO][5279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.580 [INFO][5279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.600 [INFO][5279] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.678 [INFO][5279] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.690 [INFO][5279] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.699 [INFO][5279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.700 [INFO][5279] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.708 [INFO][5279] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1 Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.721 [INFO][5279] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.752 [INFO][5279] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.199/26] block=192.168.54.192/26 handle="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.752 [INFO][5279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.199/26] handle="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" host="ip-172-31-20-18" Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.752 [INFO][5279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
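Editor's note: both IPAM runs above follow the same sequence: take the host-wide IPAM lock, look up the host's block affinity, load the affine block 192.168.54.192/26, claim the next free address (.198 for the kube-controllers pod, then .199 for the dnbvv apiserver pod), write the block back under a per-container handle, and release the lock. Stripped of the datastore plumbing, the allocation step reduces to something like the sketch below, which is a simplified in-memory stand-in rather than Calico's ipam.go.

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // block is a toy stand-in for one Calico IPAM block: a /26 whose addresses
    // are handed out in order, each recorded against the handle that claimed it.
    type block struct {
        mu     sync.Mutex        // plays the role of the host-wide IPAM lock in the log
        cidr   *net.IPNet        // e.g. 192.168.54.192/26
        next   int               // offset of the next unassigned address in the block
        owners map[string]net.IP // handle -> assigned address
    }

    // assign claims the next free address for a handle, mirroring the
    // "Attempting to assign 1 addresses from block" step in the log.
    func (b *block) assign(handle string) (net.IP, error) {
        b.mu.Lock()
        defer b.mu.Unlock()

        ones, bits := b.cidr.Mask.Size()
        if b.next >= 1<<(bits-ones) {
            return nil, fmt.Errorf("block %s exhausted", b.cidr)
        }
        ip := make(net.IP, len(b.cidr.IP))
        copy(ip, b.cidr.IP)
        ip[len(ip)-1] += byte(b.next) // safe for a /26: offsets 0..63 never overflow the last octet
        b.next++
        b.owners[handle] = ip
        return ip, nil
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.54.192/26")
        // .192 through .197 were claimed by earlier pods in the log.
        b := &block{cidr: cidr, next: 6, owners: map[string]net.IP{}}
        for _, h := range []string{
            "k8s-pod-network.2d1641d8023a", // handles truncated here for readability
            "k8s-pod-network.46527bd14eda",
        } {
            ip, err := b.assign(h)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s -> %s\n", h, ip) // 192.168.54.198, then 192.168.54.199
        }
    }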
Nov 24 00:34:36.834642 containerd[1991]: 2025-11-24 00:34:36.752 [INFO][5279] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.199/26] IPv6=[] ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" HandleID="k8s-pod-network.46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Workload="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.837720 containerd[1991]: 2025-11-24 00:34:36.780 [INFO][5181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0", GenerateName:"calico-apiserver-6594b85c5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3787437e-985f-4539-b2f2-cf4084ce8482", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594b85c5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"calico-apiserver-6594b85c5f-dnbvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45706cd2d60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:36.837720 containerd[1991]: 2025-11-24 00:34:36.782 [INFO][5181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.199/32] ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.837720 containerd[1991]: 2025-11-24 00:34:36.782 [INFO][5181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45706cd2d60 ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.837720 containerd[1991]: 2025-11-24 00:34:36.795 [INFO][5181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.837720 containerd[1991]: 2025-11-24 00:34:36.795 [INFO][5181] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0", GenerateName:"calico-apiserver-6594b85c5f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3787437e-985f-4539-b2f2-cf4084ce8482", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6594b85c5f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1", Pod:"calico-apiserver-6594b85c5f-dnbvv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45706cd2d60", MAC:"3e:a0:5e:c7:a6:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:36.837720 containerd[1991]: 2025-11-24 00:34:36.817 [INFO][5181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" Namespace="calico-apiserver" Pod="calico-apiserver-6594b85c5f-dnbvv" WorkloadEndpoint="ip--172--31--20--18-k8s-calico--apiserver--6594b85c5f--dnbvv-eth0" Nov 24 00:34:36.900751 systemd[1]: Started cri-containerd-2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28.scope - libcontainer container 2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28. 
Nov 24 00:34:36.951443 containerd[1991]: time="2025-11-24T00:34:36.951387886Z" level=info msg="connecting to shim 46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1" address="unix:///run/containerd/s/466854cc587f44fa4225d33d87280576e9936f87982d8414ec22ee2c3ef790c2" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:37.013671 systemd-networkd[1850]: calic4ad77311c4: Gained IPv6LL Nov 24 00:34:37.043522 containerd[1991]: time="2025-11-24T00:34:37.043474760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-5gwgt,Uid:e9762179-b934-433c-b377-36b45fd610b8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a86f63eaab79c92bfc71a4f66350269b9a12db1b2207fbd664e04c4226b6484\"" Nov 24 00:34:37.050802 containerd[1991]: time="2025-11-24T00:34:37.050759860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:34:37.063973 systemd[1]: Started cri-containerd-46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1.scope - libcontainer container 46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1. Nov 24 00:34:37.141623 systemd-networkd[1850]: cali30f52c3eecb: Gained IPv6LL Nov 24 00:34:37.212546 kubelet[3328]: E1124 00:34:37.209129 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:34:37.222584 systemd-networkd[1850]: cali84f457d4a8a: Link UP Nov 24 00:34:37.225689 systemd-networkd[1850]: cali84f457d4a8a: Gained carrier Nov 24 00:34:37.238729 containerd[1991]: time="2025-11-24T00:34:37.238670325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b9c8d87-7ndft,Uid:6d264742-bc12-4821-8aea-351233494ad9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2d1641d8023ad6557fbffa6bac32577648a411e57655c84cbc9c73dd53cfee28\"" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:36.937 [INFO][5357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0 csi-node-driver- calico-system 96ab7330-0514-4b4d-8ac0-0b3305cdbb91 706 0 2025-11-24 00:34:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-18 csi-node-driver-48vsh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali84f457d4a8a [] [] }} ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:36.940 [INFO][5357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" 
WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.092 [INFO][5428] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" HandleID="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Workload="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.093 [INFO][5428] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" HandleID="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Workload="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5950), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-18", "pod":"csi-node-driver-48vsh", "timestamp":"2025-11-24 00:34:37.092830303 +0000 UTC"}, Hostname:"ip-172-31-20-18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.093 [INFO][5428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.093 [INFO][5428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.093 [INFO][5428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-18' Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.107 [INFO][5428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.120 [INFO][5428] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.132 [INFO][5428] ipam/ipam.go 511: Trying affinity for 192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.135 [INFO][5428] ipam/ipam.go 158: Attempting to load block cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.146 [INFO][5428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.147 [INFO][5428] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.152 [INFO][5428] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.160 [INFO][5428] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.180 [INFO][5428] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.54.200/26] block=192.168.54.192/26 
handle="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.180 [INFO][5428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.54.200/26] handle="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" host="ip-172-31-20-18" Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.180 [INFO][5428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 00:34:37.274471 containerd[1991]: 2025-11-24 00:34:37.180 [INFO][5428] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.54.200/26] IPv6=[] ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" HandleID="k8s-pod-network.2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Workload="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.279603 containerd[1991]: 2025-11-24 00:34:37.202 [INFO][5357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96ab7330-0514-4b4d-8ac0-0b3305cdbb91", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"", Pod:"csi-node-driver-48vsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84f457d4a8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:37.279603 containerd[1991]: 2025-11-24 00:34:37.205 [INFO][5357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.54.200/32] ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.279603 containerd[1991]: 2025-11-24 00:34:37.206 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84f457d4a8a ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.279603 containerd[1991]: 2025-11-24 00:34:37.225 [INFO][5357] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.279603 containerd[1991]: 2025-11-24 00:34:37.225 [INFO][5357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96ab7330-0514-4b4d-8ac0-0b3305cdbb91", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 34, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-18", ContainerID:"2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a", Pod:"csi-node-driver-48vsh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali84f457d4a8a", MAC:"ee:b7:6b:58:27:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:34:37.279603 containerd[1991]: 2025-11-24 00:34:37.266 [INFO][5357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" Namespace="calico-system" Pod="csi-node-driver-48vsh" WorkloadEndpoint="ip--172--31--20--18-k8s-csi--node--driver--48vsh-eth0" Nov 24 00:34:37.325478 containerd[1991]: time="2025-11-24T00:34:37.323312170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:37.328126 containerd[1991]: time="2025-11-24T00:34:37.325793183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:34:37.328126 containerd[1991]: time="2025-11-24T00:34:37.325906315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:34:37.339273 containerd[1991]: time="2025-11-24T00:34:37.338086106Z" level=info msg="connecting to shim 2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a" 
address="unix:///run/containerd/s/14d1b92c2aeafa9416f9732ec936cb8711a957ed973c4766618bcf85e18813d7" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:34:37.385598 kubelet[3328]: E1124 00:34:37.385397 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:37.386205 kubelet[3328]: E1124 00:34:37.386165 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:37.395199 kubelet[3328]: E1124 00:34:37.394609 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-5gwgt_calico-apiserver(e9762179-b934-433c-b377-36b45fd610b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:37.405469 containerd[1991]: 
time="2025-11-24T00:34:37.405376485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:34:37.420221 systemd[1]: Started cri-containerd-2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a.scope - libcontainer container 2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a. Nov 24 00:34:37.426470 kubelet[3328]: E1124 00:34:37.424360 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:34:37.446470 kubelet[3328]: I1124 00:34:37.446138 3328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gltt9" podStartSLOduration=45.446119095 podStartE2EDuration="45.446119095s" podCreationTimestamp="2025-11-24 00:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:34:37.404169965 +0000 UTC m=+50.923767612" watchObservedRunningTime="2025-11-24 00:34:37.446119095 +0000 UTC m=+50.965716742" Nov 24 00:34:37.580944 containerd[1991]: time="2025-11-24T00:34:37.580909289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6594b85c5f-dnbvv,Uid:3787437e-985f-4539-b2f2-cf4084ce8482,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"46527bd14edac13bcb9330a8d304f38fd9bcc74542d4492aac719fccaee3d2b1\"" Nov 24 00:34:37.670788 containerd[1991]: time="2025-11-24T00:34:37.670727928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-48vsh,Uid:96ab7330-0514-4b4d-8ac0-0b3305cdbb91,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a\"" Nov 24 00:34:37.677808 kubelet[3328]: E1124 00:34:37.651839 3328 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ab7330_0514_4b4d_8ac0_0b3305cdbb91.slice/cri-containerd-2c890cfd50a61c8972c45db45f1f04791460f17ed3a9d63cedd8046e34879e5a.scope\": RecentStats: unable to find data in memory cache]" Nov 24 00:34:37.717787 systemd-networkd[1850]: cali1663f142c55: Gained IPv6LL Nov 24 00:34:37.808470 containerd[1991]: time="2025-11-24T00:34:37.808306856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:37.810735 containerd[1991]: time="2025-11-24T00:34:37.810680130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:34:37.810882 containerd[1991]: time="2025-11-24T00:34:37.810790648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:34:37.812470 kubelet[3328]: E1124 00:34:37.811103 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:34:37.812470 kubelet[3328]: E1124 00:34:37.811177 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:34:37.812470 kubelet[3328]: E1124 00:34:37.812316 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-769nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68b9c8d87-7ndft_calico-system(6d264742-bc12-4821-8aea-351233494ad9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:37.813408 containerd[1991]: time="2025-11-24T00:34:37.813132868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:34:37.814263 kubelet[3328]: E1124 00:34:37.814217 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:34:38.038692 systemd-networkd[1850]: cali08ad5ebf64e: Gained IPv6LL Nov 24 00:34:38.098817 containerd[1991]: time="2025-11-24T00:34:38.098778212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:38.101603 containerd[1991]: time="2025-11-24T00:34:38.101442676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:34:38.101952 systemd-networkd[1850]: cali45706cd2d60: Gained IPv6LL Nov 24 00:34:38.102900 containerd[1991]: time="2025-11-24T00:34:38.101663958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:34:38.104208 kubelet[3328]: E1124 00:34:38.103577 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:38.104208 kubelet[3328]: E1124 00:34:38.103655 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:38.104208 kubelet[3328]: E1124 00:34:38.104004 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sptth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-dnbvv_calico-apiserver(3787437e-985f-4539-b2f2-cf4084ce8482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:38.105834 containerd[1991]: time="2025-11-24T00:34:38.105217630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:34:38.105899 kubelet[3328]: E1124 00:34:38.105747 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:34:38.207231 kubelet[3328]: E1124 00:34:38.207126 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" 
podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:34:38.210725 kubelet[3328]: E1124 00:34:38.210661 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:34:38.210725 kubelet[3328]: E1124 00:34:38.210663 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:34:38.324143 systemd[1]: Started sshd@9-172.31.20.18:22-139.178.89.65:35660.service - OpenSSH per-connection server daemon (139.178.89.65:35660). Nov 24 00:34:38.405272 containerd[1991]: time="2025-11-24T00:34:38.404726219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:38.408332 containerd[1991]: time="2025-11-24T00:34:38.408246037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:34:38.408626 containerd[1991]: time="2025-11-24T00:34:38.408494351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:34:38.408802 kubelet[3328]: E1124 00:34:38.408763 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:34:38.409599 kubelet[3328]: E1124 00:34:38.408811 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:34:38.409599 kubelet[3328]: E1124 00:34:38.408968 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:38.414173 containerd[1991]: time="2025-11-24T00:34:38.413934676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:34:38.640517 sshd[5554]: Accepted publickey for core from 139.178.89.65 port 35660 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:34:38.644227 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:34:38.654540 systemd-logind[1963]: New session 10 of user core. Nov 24 00:34:38.665319 systemd[1]: Started session-10.scope - Session 10 of User core. 
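Editor's note: from here on, every Calico image pull repeats the same pattern: containerd's resolver gets a 404 from ghcr.io for the ghcr.io/flatcar/calico/*:v3.30.4 references, the pull fails with ErrImagePull, and kubelet parks the containers in ImagePullBackOff (its retry backoff roughly doubles per attempt up to a cap of a few minutes). The failure can be reproduced against the node's containerd socket, outside kubelet, with the containerd Go client; the socket path and the k8s.io namespace are taken from the log above, the module path is containerd's 1.x layout, and the rest is a sketch. Running crictl pull with the same reference on the node exercises the equivalent CRI path.

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Same socket the kubelet/CRI path uses on this node.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Kubernetes images live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // This reference is the one failing in the log; a 404 here confirms the
        // tag is missing upstream rather than a node-side problem.
        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4", containerd.WithPullUnpack)
        if err != nil {
            fmt.Println("pull failed:", err) // expected: "... not found"
        }
    }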
Nov 24 00:34:38.665545 containerd[1991]: time="2025-11-24T00:34:38.665347817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:38.672243 containerd[1991]: time="2025-11-24T00:34:38.669754908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:34:38.672243 containerd[1991]: time="2025-11-24T00:34:38.669896625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:34:38.672769 kubelet[3328]: E1124 00:34:38.672626 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:34:38.673290 kubelet[3328]: E1124 00:34:38.673255 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:34:38.673670 kubelet[3328]: E1124 00:34:38.673562 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:38.675774 kubelet[3328]: E1124 00:34:38.675708 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:39.062051 systemd-networkd[1850]: cali84f457d4a8a: Gained IPv6LL Nov 24 00:34:39.212404 kubelet[3328]: E1124 00:34:39.211294 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:34:39.212635 kubelet[3328]: E1124 00:34:39.212568 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:39.213613 kubelet[3328]: E1124 00:34:39.213566 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:34:39.767750 sshd[5557]: Connection closed by 139.178.89.65 port 35660 Nov 24 00:34:39.766687 sshd-session[5554]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:39.785176 systemd[1]: sshd@9-172.31.20.18:22-139.178.89.65:35660.service: Deactivated successfully. Nov 24 00:34:39.791027 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:34:39.793844 systemd-logind[1963]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:34:39.797050 systemd-logind[1963]: Removed session 10. 
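Every pull in this window fails the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, so the v3.30.4 tags for the ghcr.io/flatcar/calico/* repositories do not resolve and kubelet keeps cycling the affected containers through ErrImagePull and ImagePullBackOff. A minimal sketch of how that 404 could be reproduced off the node, assuming the repositories are public and that ghcr.io issues anonymous pull tokens through the standard registry token endpoint; the helper name and constants below are illustrative, not taken from the log:

import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/kube-controllers"   # one of the repositories named in the log
TAG = "v3.30.4"                            # the tag kubelet keeps failing to pull

def manifest_exists(registry, repo, tag):
    # Anonymous pull token via the Docker Registry v2 token flow
    # (assumed to be available for public ghcr.io repositories).
    token_url = "https://%s/token?scope=repository:%s:pull" % (registry, repo)
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # A 404 on the manifest endpoint is what containerd surfaces as
    # "failed to resolve reference ... not found".
    req = urllib.request.Request(
        "https://%s/v2/%s/manifests/%s" % (registry, repo, tag),
        headers={
            "Authorization": "Bearer " + token,
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(manifest_exists(REGISTRY, REPO, TAG))   # expected False while the tag is missing
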
Nov 24 00:34:41.808178 ntpd[2180]: Listen normally on 9 calic4ad77311c4 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 24 00:34:41.808246 ntpd[2180]: Listen normally on 10 cali30f52c3eecb [fe80::ecee:eeff:feee:eeee%9]:123 Nov 24 00:34:41.808273 ntpd[2180]: Listen normally on 11 cali01084730b5a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 24 00:34:41.808299 ntpd[2180]: Listen normally on 12 cali08ad5ebf64e [fe80::ecee:eeff:feee:eeee%11]:123 Nov 24 00:34:41.808325 ntpd[2180]: Listen normally on 13 cali1663f142c55 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 24 00:34:41.808352 ntpd[2180]: Listen normally on 14 cali45706cd2d60 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 24 00:34:41.808377 ntpd[2180]: Listen normally on 15 cali84f457d4a8a [fe80::ecee:eeff:feee:eeee%14]:123 Nov 24 00:34:44.802497 systemd[1]: Started sshd@10-172.31.20.18:22-139.178.89.65:48754.service - OpenSSH per-connection server daemon (139.178.89.65:48754). Nov 24 00:34:45.007045 sshd[5581]: Accepted publickey for core from 139.178.89.65 port 48754 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:34:45.012260 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:34:45.038073 systemd-logind[1963]: New session 11 of user core. Nov 24 00:34:45.044184 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:34:45.313724 sshd[5584]: Connection closed by 139.178.89.65 port 48754 Nov 24 00:34:45.314283 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:45.321287 systemd[1]: sshd@10-172.31.20.18:22-139.178.89.65:48754.service: Deactivated successfully. Nov 24 00:34:45.324669 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:34:45.325763 systemd-logind[1963]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:34:45.327965 systemd-logind[1963]: Removed session 11.
Nov 24 00:34:46.756745 containerd[1991]: time="2025-11-24T00:34:46.755411274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:34:47.025272 containerd[1991]: time="2025-11-24T00:34:47.025151700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:47.027436 containerd[1991]: time="2025-11-24T00:34:47.027320370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:34:47.027436 containerd[1991]: time="2025-11-24T00:34:47.027372773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:34:47.027640 kubelet[3328]: E1124 00:34:47.027604 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:34:47.028179 kubelet[3328]: E1124 00:34:47.027651 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:34:47.028179 kubelet[3328]: E1124 00:34:47.027896 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b38025872b7d4fa89ea0f1fb92dc0334,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:47.030181 containerd[1991]: time="2025-11-24T00:34:47.030109867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:34:47.277737 containerd[1991]: time="2025-11-24T00:34:47.277610051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:47.316758 containerd[1991]: time="2025-11-24T00:34:47.316693652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:34:47.316940 containerd[1991]: time="2025-11-24T00:34:47.316779996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:34:47.316979 kubelet[3328]: E1124 00:34:47.316939 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:34:47.317027 kubelet[3328]: E1124 00:34:47.316980 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:34:47.323660 kubelet[3328]: E1124 00:34:47.317105 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:47.326165 kubelet[3328]: E1124 00:34:47.324820 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:34:47.670814 containerd[1991]: time="2025-11-24T00:34:47.670583794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:34:47.959116 containerd[1991]: time="2025-11-24T00:34:47.959000907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:47.961237 containerd[1991]: time="2025-11-24T00:34:47.961182422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:34:47.961377 containerd[1991]: time="2025-11-24T00:34:47.961286619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:34:47.961574 kubelet[3328]: E1124 00:34:47.961517 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:34:47.961657 kubelet[3328]: E1124 00:34:47.961590 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:34:47.961884 kubelet[3328]: E1124 00:34:47.961806 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw5xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-744x5_calico-system(b03eefe9-3009-42ea-814c-37b36b40aa2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:47.964348 kubelet[3328]: E1124 00:34:47.964135 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:34:50.359062 systemd[1]: Started sshd@11-172.31.20.18:22-139.178.89.65:40502.service - OpenSSH per-connection server daemon (139.178.89.65:40502). Nov 24 00:34:50.540641 sshd[5601]: Accepted publickey for core from 139.178.89.65 port 40502 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:34:50.542204 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:34:50.548559 systemd-logind[1963]: New session 12 of user core. Nov 24 00:34:50.553804 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 00:34:50.675644 containerd[1991]: time="2025-11-24T00:34:50.675011895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:34:50.763854 sshd[5604]: Connection closed by 139.178.89.65 port 40502 Nov 24 00:34:50.764704 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:50.772509 systemd-logind[1963]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:34:50.773141 systemd[1]: sshd@11-172.31.20.18:22-139.178.89.65:40502.service: Deactivated successfully. Nov 24 00:34:50.775676 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:34:50.779249 systemd-logind[1963]: Removed session 12. Nov 24 00:34:50.799302 systemd[1]: Started sshd@12-172.31.20.18:22-139.178.89.65:40514.service - OpenSSH per-connection server daemon (139.178.89.65:40514). 
Nov 24 00:34:50.950892 containerd[1991]: time="2025-11-24T00:34:50.950719033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:50.953009 containerd[1991]: time="2025-11-24T00:34:50.952908952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:34:50.953009 containerd[1991]: time="2025-11-24T00:34:50.952974396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:34:50.953315 kubelet[3328]: E1124 00:34:50.953280 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:34:50.953632 kubelet[3328]: E1124 00:34:50.953522 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:34:50.955469 containerd[1991]: time="2025-11-24T00:34:50.954707626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:34:50.955578 kubelet[3328]: E1124 00:34:50.954562 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-769nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68b9c8d87-7ndft_calico-system(6d264742-bc12-4821-8aea-351233494ad9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:50.956616 kubelet[3328]: E1124 00:34:50.956550 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:34:50.999513 sshd[5617]: Accepted publickey for core from 139.178.89.65 port 40514 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:34:51.001408 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:34:51.011556 systemd-logind[1963]: New session 13 of user core. Nov 24 00:34:51.018976 systemd[1]: Started session-13.scope - Session 13 of User core. 
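The same pods keep reappearing in these pod_workers messages (csi-node-driver-48vsh, calico-kube-controllers-68b9c8d87-7ndft, the two calico-apiserver-6594b85c5f replicas, whisker-84bf774ffb-kpf6d, goldmane-666569f655-744x5). A hedged sketch of how the affected pods could be enumerated from the API server with the Python kubernetes client instead of grepping the journal; the waiting-reason strings are the ones kubelet writes above, everything else is illustrative:

from kubernetes import client, config

# Assumes a reachable kubeconfig; inside a pod one would call
# config.load_incluster_config() instead.
config.load_kube_config()
v1 = client.CoreV1Api()

PULL_FAILURES = {"ErrImagePull", "ImagePullBackOff"}

for pod in v1.list_pod_for_all_namespaces().items:
    for status in (pod.status.container_statuses or []):
        waiting = status.state.waiting
        if waiting and waiting.reason in PULL_FAILURES:
            # namespace/pod, the image that cannot be pulled, and the current reason
            print("%s/%s  %s  %s" % (pod.metadata.namespace, pod.metadata.name,
                                     status.image, waiting.reason))
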
Nov 24 00:34:51.257188 containerd[1991]: time="2025-11-24T00:34:51.256992874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:51.263735 containerd[1991]: time="2025-11-24T00:34:51.261471536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:34:51.263735 containerd[1991]: time="2025-11-24T00:34:51.261568345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:34:51.263951 kubelet[3328]: E1124 00:34:51.261668 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:51.263951 kubelet[3328]: E1124 00:34:51.261719 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:51.263951 kubelet[3328]: E1124 00:34:51.261872 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-5gwgt_calico-apiserver(e9762179-b934-433c-b377-36b45fd610b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:51.267041 kubelet[3328]: E1124 00:34:51.266723 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:34:51.357434 sshd[5620]: Connection closed by 139.178.89.65 port 40514 Nov 24 00:34:51.365537 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:51.373583 systemd[1]: sshd@12-172.31.20.18:22-139.178.89.65:40514.service: Deactivated successfully. Nov 24 00:34:51.376668 systemd-logind[1963]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:34:51.379990 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:34:51.406231 systemd-logind[1963]: Removed session 13. Nov 24 00:34:51.412687 systemd[1]: Started sshd@13-172.31.20.18:22-139.178.89.65:40530.service - OpenSSH per-connection server daemon (139.178.89.65:40530). Nov 24 00:34:51.630645 sshd[5630]: Accepted publickey for core from 139.178.89.65 port 40530 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:34:51.633815 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:34:51.643524 systemd-logind[1963]: New session 14 of user core. Nov 24 00:34:51.651018 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 00:34:51.676920 containerd[1991]: time="2025-11-24T00:34:51.676141076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:34:51.898091 sshd[5633]: Connection closed by 139.178.89.65 port 40530 Nov 24 00:34:51.898844 sshd-session[5630]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:51.904619 systemd[1]: sshd@13-172.31.20.18:22-139.178.89.65:40530.service: Deactivated successfully. Nov 24 00:34:51.907352 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:34:51.908983 systemd-logind[1963]: Session 14 logged out. Waiting for processes to exit. 
Nov 24 00:34:51.911143 systemd-logind[1963]: Removed session 14. Nov 24 00:34:51.962213 containerd[1991]: time="2025-11-24T00:34:51.962156460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:51.964312 containerd[1991]: time="2025-11-24T00:34:51.964256822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:34:51.964696 containerd[1991]: time="2025-11-24T00:34:51.964279207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:34:51.964812 kubelet[3328]: E1124 00:34:51.964548 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:51.964812 kubelet[3328]: E1124 00:34:51.964762 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:34:51.966015 kubelet[3328]: E1124 00:34:51.965111 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sptth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-dnbvv_calico-apiserver(3787437e-985f-4539-b2f2-cf4084ce8482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:51.966316 containerd[1991]: time="2025-11-24T00:34:51.965932573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:34:51.967167 kubelet[3328]: E1124 00:34:51.967112 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:34:52.219075 containerd[1991]: time="2025-11-24T00:34:52.218946134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:52.221231 containerd[1991]: time="2025-11-24T00:34:52.221089805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:34:52.221231 containerd[1991]: time="2025-11-24T00:34:52.221200340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:34:52.221710 kubelet[3328]: E1124 00:34:52.221655 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:34:52.221710 kubelet[3328]: E1124 00:34:52.221709 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:34:52.221877 kubelet[3328]: E1124 00:34:52.221816 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:52.225376 containerd[1991]: time="2025-11-24T00:34:52.225262430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:34:52.506254 containerd[1991]: time="2025-11-24T00:34:52.506195235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:34:52.508464 containerd[1991]: time="2025-11-24T00:34:52.508393295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:34:52.508612 containerd[1991]: time="2025-11-24T00:34:52.508401136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:34:52.508780 kubelet[3328]: E1124 00:34:52.508698 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:34:52.508780 kubelet[3328]: E1124 00:34:52.508756 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:34:52.508956 kubelet[3328]: E1124 00:34:52.508866 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:34:52.510700 kubelet[3328]: E1124 00:34:52.510120 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:34:56.931091 systemd[1]: Started sshd@14-172.31.20.18:22-139.178.89.65:40532.service - OpenSSH per-connection server daemon (139.178.89.65:40532). Nov 24 00:34:57.101515 sshd[5658]: Accepted publickey for core from 139.178.89.65 port 40532 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:34:57.104916 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:34:57.116582 systemd-logind[1963]: New session 15 of user core. Nov 24 00:34:57.125731 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:34:57.334687 sshd[5661]: Connection closed by 139.178.89.65 port 40532 Nov 24 00:34:57.335267 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Nov 24 00:34:57.341744 systemd[1]: sshd@14-172.31.20.18:22-139.178.89.65:40532.service: Deactivated successfully. Nov 24 00:34:57.344717 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:34:57.346024 systemd-logind[1963]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:34:57.348908 systemd-logind[1963]: Removed session 15. Nov 24 00:34:58.679424 kubelet[3328]: E1124 00:34:58.679363 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:35:01.679560 kubelet[3328]: E1124 00:35:01.679481 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:35:02.474010 systemd[1]: Started sshd@15-172.31.20.18:22-139.178.89.65:54238.service - OpenSSH per-connection server daemon (139.178.89.65:54238). Nov 24 00:35:03.149815 sshd[5698]: Accepted publickey for core from 139.178.89.65 port 54238 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:03.168937 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:03.196886 systemd-logind[1963]: New session 16 of user core. 
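From 00:34:58 onward the errors are reported as ImagePullBackOff rather than ErrImagePull: the images still do not resolve, so kubelet is now spacing out its retries. Kubernetes documents this back-off as starting at roughly 10 seconds and doubling per failed attempt up to a 5-minute cap, which is consistent with the widening gaps between the pull attempts logged above. A small arithmetic sketch of that schedule; the constants mirror the documented defaults, not values read from this node's configuration:

# Cumulative retry schedule for an image that keeps failing to pull,
# assuming the documented kubelet defaults: 10 s initial back-off,
# doubling per failure, capped at 300 s (5 minutes).
INITIAL_S = 10
CAP_S = 300

delay, elapsed = INITIAL_S, 0
for attempt in range(1, 9):
    elapsed += delay
    print("attempt %d retried after %3d s (t+%d s)" % (attempt, delay, elapsed))
    delay = min(delay * 2, CAP_S)
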
Nov 24 00:35:03.214868 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:35:03.781474 sshd[5701]: Connection closed by 139.178.89.65 port 54238 Nov 24 00:35:03.783082 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:03.802341 systemd[1]: sshd@15-172.31.20.18:22-139.178.89.65:54238.service: Deactivated successfully. Nov 24 00:35:03.810939 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:35:03.812509 systemd-logind[1963]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:35:03.815392 systemd-logind[1963]: Removed session 16. Nov 24 00:35:05.671102 kubelet[3328]: E1124 00:35:05.671043 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:35:06.676355 kubelet[3328]: E1124 00:35:06.676230 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:35:06.678558 kubelet[3328]: E1124 00:35:06.677431 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:35:06.678558 kubelet[3328]: E1124 00:35:06.677541 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:35:08.823972 systemd[1]: Started sshd@16-172.31.20.18:22-139.178.89.65:54252.service - OpenSSH per-connection server daemon (139.178.89.65:54252). Nov 24 00:35:09.062516 sshd[5717]: Accepted publickey for core from 139.178.89.65 port 54252 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:09.067284 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:09.077610 systemd-logind[1963]: New session 17 of user core. Nov 24 00:35:09.083819 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:35:09.360326 sshd[5720]: Connection closed by 139.178.89.65 port 54252 Nov 24 00:35:09.361395 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:09.368584 systemd-logind[1963]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:35:09.369266 systemd[1]: sshd@16-172.31.20.18:22-139.178.89.65:54252.service: Deactivated successfully. Nov 24 00:35:09.371984 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:35:09.376708 systemd-logind[1963]: Removed session 17. Nov 24 00:35:09.671479 containerd[1991]: time="2025-11-24T00:35:09.671354397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:35:09.927832 containerd[1991]: time="2025-11-24T00:35:09.927501984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:09.929683 containerd[1991]: time="2025-11-24T00:35:09.929623770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:35:09.929870 containerd[1991]: time="2025-11-24T00:35:09.929651468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:35:09.929941 kubelet[3328]: E1124 00:35:09.929900 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:35:09.930258 kubelet[3328]: E1124 00:35:09.929953 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:35:09.930258 kubelet[3328]: E1124 00:35:09.930069 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b38025872b7d4fa89ea0f1fb92dc0334,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:09.933119 containerd[1991]: time="2025-11-24T00:35:09.933085595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:35:10.237307 containerd[1991]: time="2025-11-24T00:35:10.237259708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:10.239534 containerd[1991]: time="2025-11-24T00:35:10.239377172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:35:10.239534 containerd[1991]: time="2025-11-24T00:35:10.239427098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:35:10.239893 kubelet[3328]: E1124 00:35:10.239645 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:35:10.239893 kubelet[3328]: E1124 00:35:10.239785 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:35:10.240026 kubelet[3328]: E1124 00:35:10.239979 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:10.241250 kubelet[3328]: E1124 00:35:10.241112 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:35:14.400135 systemd[1]: Started sshd@17-172.31.20.18:22-139.178.89.65:47918.service - OpenSSH per-connection server daemon (139.178.89.65:47918). 
Nov 24 00:35:14.627195 sshd[5741]: Accepted publickey for core from 139.178.89.65 port 47918 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:14.630211 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:14.640441 systemd-logind[1963]: New session 18 of user core. Nov 24 00:35:14.644997 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:35:14.987662 sshd[5744]: Connection closed by 139.178.89.65 port 47918 Nov 24 00:35:14.989770 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:15.003612 systemd[1]: sshd@17-172.31.20.18:22-139.178.89.65:47918.service: Deactivated successfully. Nov 24 00:35:15.013746 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:35:15.035364 systemd-logind[1963]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:35:15.037479 systemd[1]: Started sshd@18-172.31.20.18:22-139.178.89.65:47932.service - OpenSSH per-connection server daemon (139.178.89.65:47932). Nov 24 00:35:15.039983 systemd-logind[1963]: Removed session 18. Nov 24 00:35:15.233547 sshd[5756]: Accepted publickey for core from 139.178.89.65 port 47932 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:15.235137 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:15.242334 systemd-logind[1963]: New session 19 of user core. Nov 24 00:35:15.251880 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 24 00:35:16.065555 sshd[5759]: Connection closed by 139.178.89.65 port 47932 Nov 24 00:35:16.067216 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:16.074407 systemd[1]: sshd@18-172.31.20.18:22-139.178.89.65:47932.service: Deactivated successfully. Nov 24 00:35:16.085346 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 00:35:16.093293 systemd-logind[1963]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:35:16.113954 systemd[1]: Started sshd@19-172.31.20.18:22-139.178.89.65:47946.service - OpenSSH per-connection server daemon (139.178.89.65:47946). Nov 24 00:35:16.116083 systemd-logind[1963]: Removed session 19. Nov 24 00:35:16.323810 sshd[5769]: Accepted publickey for core from 139.178.89.65 port 47946 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:16.325610 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:16.331864 systemd-logind[1963]: New session 20 of user core. Nov 24 00:35:16.340974 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 24 00:35:16.678542 containerd[1991]: time="2025-11-24T00:35:16.678127580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:35:16.936055 containerd[1991]: time="2025-11-24T00:35:16.935052795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:16.937570 containerd[1991]: time="2025-11-24T00:35:16.937375972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:35:16.937731 containerd[1991]: time="2025-11-24T00:35:16.937444743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:35:16.938188 kubelet[3328]: E1124 00:35:16.938078 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:35:16.939874 kubelet[3328]: E1124 00:35:16.938163 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:35:16.940515 kubelet[3328]: E1124 00:35:16.940273 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw5xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-744x5_calico-system(b03eefe9-3009-42ea-814c-37b36b40aa2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:16.941530 kubelet[3328]: E1124 00:35:16.941480 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:35:17.278790 sshd[5772]: Connection closed by 139.178.89.65 port 47946 Nov 24 00:35:17.284735 sshd-session[5769]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:17.318739 systemd[1]: sshd@19-172.31.20.18:22-139.178.89.65:47946.service: Deactivated successfully. Nov 24 00:35:17.324777 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:35:17.326877 systemd-logind[1963]: Session 20 logged out. Waiting for processes to exit. Nov 24 00:35:17.336498 systemd[1]: Started sshd@20-172.31.20.18:22-139.178.89.65:47962.service - OpenSSH per-connection server daemon (139.178.89.65:47962). Nov 24 00:35:17.344167 systemd-logind[1963]: Removed session 20. Nov 24 00:35:17.569195 sshd[5792]: Accepted publickey for core from 139.178.89.65 port 47962 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:17.571439 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:17.581698 systemd-logind[1963]: New session 21 of user core. Nov 24 00:35:17.588781 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 24 00:35:17.671115 containerd[1991]: time="2025-11-24T00:35:17.671023532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:35:17.948615 containerd[1991]: time="2025-11-24T00:35:17.948280439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:17.951138 containerd[1991]: time="2025-11-24T00:35:17.950984744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:35:17.951138 containerd[1991]: time="2025-11-24T00:35:17.951104628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:35:17.951513 kubelet[3328]: E1124 00:35:17.951433 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:35:17.952058 kubelet[3328]: E1124 00:35:17.951526 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:35:17.953581 kubelet[3328]: E1124 00:35:17.953522 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:17.957293 containerd[1991]: time="2025-11-24T00:35:17.957000167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:35:18.227151 containerd[1991]: time="2025-11-24T00:35:18.227096206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:18.229290 containerd[1991]: time="2025-11-24T00:35:18.229227057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:35:18.230534 containerd[1991]: time="2025-11-24T00:35:18.229268050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:35:18.230735 kubelet[3328]: E1124 00:35:18.230689 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:35:18.230835 kubelet[3328]: E1124 00:35:18.230764 3328 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:35:18.231344 kubelet[3328]: E1124 00:35:18.231165 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:18.233049 kubelet[3328]: E1124 00:35:18.233001 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:35:18.476837 sshd[5797]: Connection closed by 139.178.89.65 port 47962 Nov 24 00:35:18.477261 sshd-session[5792]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:18.488348 systemd[1]: sshd@20-172.31.20.18:22-139.178.89.65:47962.service: Deactivated successfully. Nov 24 00:35:18.492742 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:35:18.497434 systemd-logind[1963]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:35:18.510336 systemd[1]: Started sshd@21-172.31.20.18:22-139.178.89.65:47976.service - OpenSSH per-connection server daemon (139.178.89.65:47976). Nov 24 00:35:18.512017 systemd-logind[1963]: Removed session 21. Nov 24 00:35:18.675929 containerd[1991]: time="2025-11-24T00:35:18.675609715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:35:18.732553 sshd[5807]: Accepted publickey for core from 139.178.89.65 port 47976 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:18.736323 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:18.747062 systemd-logind[1963]: New session 22 of user core. Nov 24 00:35:18.753910 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 00:35:18.928169 containerd[1991]: time="2025-11-24T00:35:18.927984411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:18.931031 containerd[1991]: time="2025-11-24T00:35:18.930224192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:35:18.931031 containerd[1991]: time="2025-11-24T00:35:18.930314281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:35:18.931602 kubelet[3328]: E1124 00:35:18.931551 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:35:18.931938 kubelet[3328]: E1124 00:35:18.931616 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:35:18.932478 kubelet[3328]: E1124 00:35:18.932207 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-769nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68b9c8d87-7ndft_calico-system(6d264742-bc12-4821-8aea-351233494ad9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:18.933770 kubelet[3328]: E1124 00:35:18.933539 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:35:19.002296 sshd[5810]: Connection closed by 139.178.89.65 port 47976 Nov 24 00:35:19.004265 
sshd-session[5807]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:19.014127 systemd[1]: sshd@21-172.31.20.18:22-139.178.89.65:47976.service: Deactivated successfully. Nov 24 00:35:19.017464 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 00:35:19.020144 systemd-logind[1963]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:35:19.022182 systemd-logind[1963]: Removed session 22. Nov 24 00:35:19.675639 containerd[1991]: time="2025-11-24T00:35:19.675482214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:35:19.965018 containerd[1991]: time="2025-11-24T00:35:19.964880204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:19.967557 containerd[1991]: time="2025-11-24T00:35:19.967486406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:35:19.967762 containerd[1991]: time="2025-11-24T00:35:19.967484427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:35:19.968002 kubelet[3328]: E1124 00:35:19.967952 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:35:19.968398 kubelet[3328]: E1124 00:35:19.968011 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:35:19.968398 kubelet[3328]: E1124 00:35:19.968174 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-5gwgt_calico-apiserver(e9762179-b934-433c-b377-36b45fd610b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:19.969392 kubelet[3328]: E1124 00:35:19.969340 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:35:21.670475 containerd[1991]: time="2025-11-24T00:35:21.670392446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:35:21.968676 containerd[1991]: time="2025-11-24T00:35:21.968529534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:21.970851 containerd[1991]: time="2025-11-24T00:35:21.970665910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:35:21.970851 containerd[1991]: time="2025-11-24T00:35:21.970717682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:35:21.971088 kubelet[3328]: E1124 00:35:21.971008 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:35:21.971088 kubelet[3328]: E1124 00:35:21.971067 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:35:21.972329 kubelet[3328]: E1124 
00:35:21.972240 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sptth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-dnbvv_calico-apiserver(3787437e-985f-4539-b2f2-cf4084ce8482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:21.974426 kubelet[3328]: E1124 00:35:21.974381 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:35:22.674788 kubelet[3328]: E1124 00:35:22.674716 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed 
to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:35:24.055183 systemd[1]: Started sshd@22-172.31.20.18:22-139.178.89.65:40758.service - OpenSSH per-connection server daemon (139.178.89.65:40758). Nov 24 00:35:24.244721 sshd[5826]: Accepted publickey for core from 139.178.89.65 port 40758 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:24.246017 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:24.252500 systemd-logind[1963]: New session 23 of user core. Nov 24 00:35:24.259955 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 00:35:24.490801 sshd[5829]: Connection closed by 139.178.89.65 port 40758 Nov 24 00:35:24.491429 sshd-session[5826]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:24.496634 systemd[1]: sshd@22-172.31.20.18:22-139.178.89.65:40758.service: Deactivated successfully. Nov 24 00:35:24.499379 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 00:35:24.501153 systemd-logind[1963]: Session 23 logged out. Waiting for processes to exit. Nov 24 00:35:24.503125 systemd-logind[1963]: Removed session 23. Nov 24 00:35:29.536663 systemd[1]: Started sshd@23-172.31.20.18:22-139.178.89.65:41504.service - OpenSSH per-connection server daemon (139.178.89.65:41504). Nov 24 00:35:29.557665 update_engine[1966]: I20251124 00:35:29.557595 1966 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 24 00:35:29.558623 update_engine[1966]: I20251124 00:35:29.558044 1966 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 24 00:35:29.562289 update_engine[1966]: I20251124 00:35:29.562249 1966 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 24 00:35:29.563470 update_engine[1966]: I20251124 00:35:29.563097 1966 omaha_request_params.cc:62] Current group set to beta Nov 24 00:35:29.564580 update_engine[1966]: I20251124 00:35:29.564546 1966 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 24 00:35:29.567708 update_engine[1966]: I20251124 00:35:29.565472 1966 update_attempter.cc:643] Scheduling an action processor start. 
Nov 24 00:35:29.567708 update_engine[1966]: I20251124 00:35:29.565523 1966 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 24 00:35:29.567708 update_engine[1966]: I20251124 00:35:29.565585 1966 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 24 00:35:29.567708 update_engine[1966]: I20251124 00:35:29.565675 1966 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 24 00:35:29.567708 update_engine[1966]: I20251124 00:35:29.565683 1966 omaha_request_action.cc:272] Request: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: Nov 24 00:35:29.567708 update_engine[1966]: I20251124 00:35:29.565691 1966 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:35:29.601503 update_engine[1966]: I20251124 00:35:29.601114 1966 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:35:29.604110 update_engine[1966]: I20251124 00:35:29.604019 1966 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:35:29.619148 update_engine[1966]: E20251124 00:35:29.619095 1966 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:35:29.635089 update_engine[1966]: I20251124 00:35:29.635035 1966 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 24 00:35:29.642466 locksmithd[2021]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 24 00:35:29.673116 kubelet[3328]: E1124 00:35:29.672637 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:35:29.837829 sshd[5841]: Accepted publickey for core from 139.178.89.65 port 41504 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:29.840958 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:29.850647 systemd-logind[1963]: New session 24 of user core. Nov 24 00:35:29.857763 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 24 00:35:30.235013 sshd[5846]: Connection closed by 139.178.89.65 port 41504 Nov 24 00:35:30.235765 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:30.245585 systemd-logind[1963]: Session 24 logged out. Waiting for processes to exit. Nov 24 00:35:30.247536 systemd[1]: sshd@23-172.31.20.18:22-139.178.89.65:41504.service: Deactivated successfully. Nov 24 00:35:30.250397 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 00:35:30.256074 systemd-logind[1963]: Removed session 24. 
Nov 24 00:35:30.675607 kubelet[3328]: E1124 00:35:30.674666 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:35:30.678306 kubelet[3328]: E1124 00:35:30.678216 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:35:34.676038 kubelet[3328]: E1124 00:35:34.675988 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:35:35.272624 systemd[1]: Started sshd@24-172.31.20.18:22-139.178.89.65:41520.service - OpenSSH per-connection server daemon (139.178.89.65:41520). Nov 24 00:35:35.512936 sshd[5882]: Accepted publickey for core from 139.178.89.65 port 41520 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:35.517636 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:35.528894 systemd-logind[1963]: New session 25 of user core. Nov 24 00:35:35.537675 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 24 00:35:35.671910 kubelet[3328]: E1124 00:35:35.671864 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:35:36.221582 sshd[5885]: Connection closed by 139.178.89.65 port 41520 Nov 24 00:35:36.223895 sshd-session[5882]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:36.232797 systemd[1]: sshd@24-172.31.20.18:22-139.178.89.65:41520.service: Deactivated successfully. Nov 24 00:35:36.238408 systemd[1]: session-25.scope: Deactivated successfully. Nov 24 00:35:36.241083 systemd-logind[1963]: Session 25 logged out. Waiting for processes to exit. Nov 24 00:35:36.244242 systemd-logind[1963]: Removed session 25. Nov 24 00:35:36.675623 kubelet[3328]: E1124 00:35:36.675576 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:35:39.448171 update_engine[1966]: I20251124 00:35:39.448089 1966 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:35:39.448577 update_engine[1966]: I20251124 00:35:39.448211 1966 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:35:39.449748 update_engine[1966]: I20251124 00:35:39.449695 1966 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:35:39.451134 update_engine[1966]: E20251124 00:35:39.451067 1966 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:35:39.451229 update_engine[1966]: I20251124 00:35:39.451171 1966 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 24 00:35:41.264263 systemd[1]: Started sshd@25-172.31.20.18:22-139.178.89.65:40492.service - OpenSSH per-connection server daemon (139.178.89.65:40492). Nov 24 00:35:41.501442 sshd[5897]: Accepted publickey for core from 139.178.89.65 port 40492 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:41.504339 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:41.510763 systemd-logind[1963]: New session 26 of user core. Nov 24 00:35:41.518703 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 24 00:35:41.872674 sshd[5900]: Connection closed by 139.178.89.65 port 40492 Nov 24 00:35:41.876541 sshd-session[5897]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:41.890827 systemd[1]: sshd@25-172.31.20.18:22-139.178.89.65:40492.service: Deactivated successfully. Nov 24 00:35:41.898800 systemd[1]: session-26.scope: Deactivated successfully. Nov 24 00:35:41.905286 systemd-logind[1963]: Session 26 logged out. 
Waiting for processes to exit. Nov 24 00:35:41.908672 systemd-logind[1963]: Removed session 26. Nov 24 00:35:43.671488 kubelet[3328]: E1124 00:35:43.671411 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:35:43.673251 kubelet[3328]: E1124 00:35:43.672642 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:35:45.671470 kubelet[3328]: E1124 00:35:45.670943 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:35:46.677165 kubelet[3328]: E1124 00:35:46.677106 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:35:46.911932 systemd[1]: Started 
sshd@26-172.31.20.18:22-139.178.89.65:40506.service - OpenSSH per-connection server daemon (139.178.89.65:40506). Nov 24 00:35:47.112605 sshd[5913]: Accepted publickey for core from 139.178.89.65 port 40506 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:47.114802 sshd-session[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:47.121880 systemd-logind[1963]: New session 27 of user core. Nov 24 00:35:47.128671 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 24 00:35:47.503174 sshd[5916]: Connection closed by 139.178.89.65 port 40506 Nov 24 00:35:47.506710 sshd-session[5913]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:47.513493 systemd-logind[1963]: Session 27 logged out. Waiting for processes to exit. Nov 24 00:35:47.513952 systemd[1]: sshd@26-172.31.20.18:22-139.178.89.65:40506.service: Deactivated successfully. Nov 24 00:35:47.518271 systemd[1]: session-27.scope: Deactivated successfully. Nov 24 00:35:47.524169 systemd-logind[1963]: Removed session 27. Nov 24 00:35:47.670534 kubelet[3328]: E1124 00:35:47.670437 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:35:49.446576 update_engine[1966]: I20251124 00:35:49.446496 1966 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:35:49.447116 update_engine[1966]: I20251124 00:35:49.446601 1966 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:35:49.447116 update_engine[1966]: I20251124 00:35:49.447045 1966 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:35:49.448359 update_engine[1966]: E20251124 00:35:49.448318 1966 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:35:49.448561 update_engine[1966]: I20251124 00:35:49.448423 1966 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 24 00:35:49.671945 kubelet[3328]: E1124 00:35:49.671873 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:35:52.546799 systemd[1]: Started sshd@27-172.31.20.18:22-139.178.89.65:55404.service - OpenSSH per-connection server daemon (139.178.89.65:55404). 
Nov 24 00:35:52.744282 sshd[5934]: Accepted publickey for core from 139.178.89.65 port 55404 ssh2: RSA SHA256:/bCMGSOGigmzHBfmwKmKdP2EUzY9oQNIAYJfV+lr0sI Nov 24 00:35:52.747168 sshd-session[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:35:52.774018 systemd-logind[1963]: New session 28 of user core. Nov 24 00:35:52.778677 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 24 00:35:53.092472 sshd[5937]: Connection closed by 139.178.89.65 port 55404 Nov 24 00:35:53.092713 sshd-session[5934]: pam_unix(sshd:session): session closed for user core Nov 24 00:35:53.100146 systemd[1]: sshd@27-172.31.20.18:22-139.178.89.65:55404.service: Deactivated successfully. Nov 24 00:35:53.103593 systemd[1]: session-28.scope: Deactivated successfully. Nov 24 00:35:53.105501 systemd-logind[1963]: Session 28 logged out. Waiting for processes to exit. Nov 24 00:35:53.109955 systemd-logind[1963]: Removed session 28. Nov 24 00:35:57.671997 kubelet[3328]: E1124 00:35:57.671573 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:35:57.671997 kubelet[3328]: E1124 00:35:57.671951 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:35:57.680313 containerd[1991]: time="2025-11-24T00:35:57.680215401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:35:57.949815 containerd[1991]: time="2025-11-24T00:35:57.949659042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:57.952472 containerd[1991]: time="2025-11-24T00:35:57.951964212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:35:57.952472 containerd[1991]: time="2025-11-24T00:35:57.952176163Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:35:57.952693 kubelet[3328]: E1124 00:35:57.952392 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:35:57.952693 kubelet[3328]: E1124 00:35:57.952503 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:35:57.952842 kubelet[3328]: E1124 00:35:57.952746 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dw5xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-744x5_calico-system(b03eefe9-3009-42ea-814c-37b36b40aa2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:57.953948 kubelet[3328]: E1124 00:35:57.953895 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:35:58.670791 containerd[1991]: time="2025-11-24T00:35:58.670497010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:35:58.982178 containerd[1991]: time="2025-11-24T00:35:58.971589347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:58.984400 containerd[1991]: time="2025-11-24T00:35:58.984290636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:35:58.984637 containerd[1991]: time="2025-11-24T00:35:58.984322815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:35:58.984967 kubelet[3328]: E1124 00:35:58.984922 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:35:58.985700 kubelet[3328]: E1124 00:35:58.984971 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:35:58.985700 kubelet[3328]: E1124 00:35:58.985076 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b38025872b7d4fa89ea0f1fb92dc0334,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:58.988244 containerd[1991]: time="2025-11-24T00:35:58.988193750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:35:59.306832 containerd[1991]: time="2025-11-24T00:35:59.305936121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:35:59.308141 containerd[1991]: time="2025-11-24T00:35:59.308077085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:35:59.308403 containerd[1991]: time="2025-11-24T00:35:59.308109870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:35:59.308512 kubelet[3328]: E1124 00:35:59.308300 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:35:59.308512 kubelet[3328]: E1124 00:35:59.308350 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:35:59.308642 kubelet[3328]: E1124 00:35:59.308518 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6n9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84bf774ffb-kpf6d_calico-system(6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:35:59.309778 kubelet[3328]: E1124 00:35:59.309729 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:35:59.451074 update_engine[1966]: I20251124 00:35:59.450976 1966 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:35:59.451576 update_engine[1966]: I20251124 00:35:59.451086 1966 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:35:59.452247 
update_engine[1966]: I20251124 00:35:59.452143 1966 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 24 00:35:59.454779 update_engine[1966]: E20251124 00:35:59.454725 1966 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.454818 1966 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.454828 1966 omaha_request_action.cc:617] Omaha request response: Nov 24 00:35:59.455871 update_engine[1966]: E20251124 00:35:59.455725 1966 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.455775 1966 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.455785 1966 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.455792 1966 update_attempter.cc:306] Processing Done. Nov 24 00:35:59.455871 update_engine[1966]: E20251124 00:35:59.455811 1966 update_attempter.cc:619] Update failed. Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.455818 1966 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.455825 1966 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 24 00:35:59.455871 update_engine[1966]: I20251124 00:35:59.455833 1966 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 24 00:35:59.456795 update_engine[1966]: I20251124 00:35:59.455929 1966 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 24 00:35:59.456795 update_engine[1966]: I20251124 00:35:59.456013 1966 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 24 00:35:59.456795 update_engine[1966]: I20251124 00:35:59.456025 1966 omaha_request_action.cc:272] Request: Nov 24 00:35:59.456795 update_engine[1966]: Nov 24 00:35:59.456795 update_engine[1966]: Nov 24 00:35:59.456795 update_engine[1966]: Nov 24 00:35:59.456795 update_engine[1966]: Nov 24 00:35:59.456795 update_engine[1966]: Nov 24 00:35:59.456795 update_engine[1966]: Nov 24 00:35:59.456795 update_engine[1966]: I20251124 00:35:59.456034 1966 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 24 00:35:59.456795 update_engine[1966]: I20251124 00:35:59.456089 1966 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 24 00:35:59.457314 locksmithd[2021]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 24 00:35:59.457688 update_engine[1966]: I20251124 00:35:59.456808 1966 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 24 00:35:59.458148 update_engine[1966]: E20251124 00:35:59.458110 1966 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 24 00:35:59.458237 update_engine[1966]: I20251124 00:35:59.458204 1966 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 24 00:35:59.458237 update_engine[1966]: I20251124 00:35:59.458230 1966 omaha_request_action.cc:617] Omaha request response: Nov 24 00:35:59.458317 update_engine[1966]: I20251124 00:35:59.458240 1966 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 24 00:35:59.458317 update_engine[1966]: I20251124 00:35:59.458247 1966 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 24 00:35:59.458317 update_engine[1966]: I20251124 00:35:59.458254 1966 update_attempter.cc:306] Processing Done. Nov 24 00:35:59.458317 update_engine[1966]: I20251124 00:35:59.458263 1966 update_attempter.cc:310] Error event sent. Nov 24 00:35:59.458317 update_engine[1966]: I20251124 00:35:59.458276 1966 update_check_scheduler.cc:74] Next update check in 42m32s Nov 24 00:35:59.458709 locksmithd[2021]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 24 00:36:02.675021 containerd[1991]: time="2025-11-24T00:36:02.674964100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:36:02.940361 containerd[1991]: time="2025-11-24T00:36:02.940174284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:36:02.942659 containerd[1991]: time="2025-11-24T00:36:02.942586479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:36:02.942659 containerd[1991]: time="2025-11-24T00:36:02.942615882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:36:02.942938 kubelet[3328]: E1124 00:36:02.942885 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:36:02.943646 kubelet[3328]: E1124 00:36:02.942952 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:36:02.943646 kubelet[3328]: E1124 00:36:02.943136 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sptth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-dnbvv_calico-apiserver(3787437e-985f-4539-b2f2-cf4084ce8482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:36:02.944527 kubelet[3328]: E1124 00:36:02.944436 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:36:03.678379 containerd[1991]: time="2025-11-24T00:36:03.677840263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:36:04.006218 containerd[1991]: time="2025-11-24T00:36:04.006153505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:36:04.009915 containerd[1991]: time="2025-11-24T00:36:04.009420561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:36:04.009915 containerd[1991]: time="2025-11-24T00:36:04.009423558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:36:04.010138 kubelet[3328]: E1124 00:36:04.010036 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:36:04.010138 kubelet[3328]: E1124 00:36:04.010109 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:36:04.011283 kubelet[3328]: E1124 00:36:04.010282 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6594b85c5f-5gwgt_calico-apiserver(e9762179-b934-433c-b377-36b45fd610b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:36:04.015906 kubelet[3328]: E1124 00:36:04.014814 3328 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:36:07.813109 systemd[1]: cri-containerd-b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf.scope: Deactivated successfully. Nov 24 00:36:07.814702 systemd[1]: cri-containerd-b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf.scope: Consumed 4.012s CPU time, 85M memory peak, 59.3M read from disk. Nov 24 00:36:07.980002 containerd[1991]: time="2025-11-24T00:36:07.979955384Z" level=info msg="received container exit event container_id:\"b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf\" id:\"b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf\" pid:3156 exit_status:1 exited_at:{seconds:1763944567 nanos:835213493}" Nov 24 00:36:08.051313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf-rootfs.mount: Deactivated successfully. Nov 24 00:36:08.178358 systemd[1]: cri-containerd-7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698.scope: Deactivated successfully. Nov 24 00:36:08.178748 systemd[1]: cri-containerd-7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698.scope: Consumed 13.580s CPU time, 109.8M memory peak, 39.7M read from disk. Nov 24 00:36:08.183127 containerd[1991]: time="2025-11-24T00:36:08.183056619Z" level=info msg="received container exit event container_id:\"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\" id:\"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\" pid:3914 exit_status:1 exited_at:{seconds:1763944568 nanos:182105424}" Nov 24 00:36:08.216297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698-rootfs.mount: Deactivated successfully. 
Nov 24 00:36:08.853182 kubelet[3328]: E1124 00:36:08.853099 3328 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-18?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 00:36:09.149226 kubelet[3328]: I1124 00:36:09.148932 3328 scope.go:117] "RemoveContainer" containerID="7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698" Nov 24 00:36:09.149439 kubelet[3328]: I1124 00:36:09.149367 3328 scope.go:117] "RemoveContainer" containerID="b8e6f58675c3f3c078529bc016c3cc086953f0a4305106134ec3e711e107eccf" Nov 24 00:36:09.152241 containerd[1991]: time="2025-11-24T00:36:09.152104366Z" level=info msg="CreateContainer within sandbox \"d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 24 00:36:09.153233 containerd[1991]: time="2025-11-24T00:36:09.152104339Z" level=info msg="CreateContainer within sandbox \"b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 24 00:36:09.234489 containerd[1991]: time="2025-11-24T00:36:09.234115340Z" level=info msg="Container bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:36:09.252477 containerd[1991]: time="2025-11-24T00:36:09.251992893Z" level=info msg="Container a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:36:09.294481 containerd[1991]: time="2025-11-24T00:36:09.294424254Z" level=info msg="CreateContainer within sandbox \"d700776aa0c8d87f7c48622f003cefefa7ac9bb2b23639553b40cb7852dba4f7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466\"" Nov 24 00:36:09.294977 containerd[1991]: time="2025-11-24T00:36:09.294934029Z" level=info msg="StartContainer for \"bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466\"" Nov 24 00:36:09.305361 containerd[1991]: time="2025-11-24T00:36:09.305311133Z" level=info msg="connecting to shim bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466" address="unix:///run/containerd/s/1eb6a4323d03d4838111bf8a3738f700b4171dc26d4a84e35c9f248c5a6c642b" protocol=ttrpc version=3 Nov 24 00:36:09.311484 containerd[1991]: time="2025-11-24T00:36:09.310981041Z" level=info msg="CreateContainer within sandbox \"b9c3d2c40fe5582557bfffa780d21e51b157fe7b7e68582b3a85f21b02a669ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a\"" Nov 24 00:36:09.325986 containerd[1991]: time="2025-11-24T00:36:09.325933821Z" level=info msg="StartContainer for \"a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a\"" Nov 24 00:36:09.331289 containerd[1991]: time="2025-11-24T00:36:09.331243884Z" level=info msg="connecting to shim a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a" address="unix:///run/containerd/s/41d8308cfbe50d144b4a72bbb83e6ffa9a3b100aff59b339a31df4c5c6d72be1" protocol=ttrpc version=3 Nov 24 00:36:09.339690 systemd[1]: Started cri-containerd-bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466.scope - libcontainer container bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466. 
Nov 24 00:36:09.382723 systemd[1]: Started cri-containerd-a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a.scope - libcontainer container a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a. Nov 24 00:36:09.446941 containerd[1991]: time="2025-11-24T00:36:09.446806719Z" level=info msg="StartContainer for \"bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466\" returns successfully" Nov 24 00:36:09.516460 containerd[1991]: time="2025-11-24T00:36:09.516302022Z" level=info msg="StartContainer for \"a1586ffc5cdcab64bcce3343cb286e3bb2e8e2ef6d633fcd405e319b5520ca3a\" returns successfully" Nov 24 00:36:09.671194 containerd[1991]: time="2025-11-24T00:36:09.671147634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:36:09.958503 containerd[1991]: time="2025-11-24T00:36:09.958102734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:36:09.961638 containerd[1991]: time="2025-11-24T00:36:09.961421031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:36:09.961638 containerd[1991]: time="2025-11-24T00:36:09.961427250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:36:09.962495 kubelet[3328]: E1124 00:36:09.961849 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:36:09.962495 kubelet[3328]: E1124 00:36:09.961933 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:36:09.962495 kubelet[3328]: E1124 00:36:09.962164 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:36:09.964589 containerd[1991]: time="2025-11-24T00:36:09.964550502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:36:10.226708 containerd[1991]: time="2025-11-24T00:36:10.226663499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:36:10.229005 containerd[1991]: time="2025-11-24T00:36:10.228899345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:36:10.230762 containerd[1991]: time="2025-11-24T00:36:10.228966780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:36:10.230863 kubelet[3328]: E1124 00:36:10.229407 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:36:10.230863 kubelet[3328]: E1124 00:36:10.229467 3328 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:36:10.230863 kubelet[3328]: E1124 00:36:10.229635 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9whk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-48vsh_calico-system(96ab7330-0514-4b4d-8ac0-0b3305cdbb91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:36:10.231330 kubelet[3328]: E1124 00:36:10.231264 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91" Nov 24 00:36:11.670886 containerd[1991]: time="2025-11-24T00:36:11.670831383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:36:11.741688 systemd[1]: cri-containerd-7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260.scope: Deactivated successfully. Nov 24 00:36:11.742065 systemd[1]: cri-containerd-7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260.scope: Consumed 3.951s CPU time, 40.9M memory peak, 38.6M read from disk. Nov 24 00:36:11.745403 containerd[1991]: time="2025-11-24T00:36:11.745364023Z" level=info msg="received container exit event container_id:\"7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260\" id:\"7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260\" pid:3149 exit_status:1 exited_at:{seconds:1763944571 nanos:744727075}" Nov 24 00:36:11.779639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260-rootfs.mount: Deactivated successfully. Nov 24 00:36:11.933107 containerd[1991]: time="2025-11-24T00:36:11.932965429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:36:11.935233 containerd[1991]: time="2025-11-24T00:36:11.935087848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:36:11.935413 containerd[1991]: time="2025-11-24T00:36:11.935272006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:36:11.935557 kubelet[3328]: E1124 00:36:11.935496 3328 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:36:11.936120 kubelet[3328]: E1124 00:36:11.935565 3328 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:36:11.936120 kubelet[3328]: E1124 00:36:11.935956 3328 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-769nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68b9c8d87-7ndft_calico-system(6d264742-bc12-4821-8aea-351233494ad9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:36:11.938304 kubelet[3328]: E1124 00:36:11.938253 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9" Nov 24 00:36:12.176857 kubelet[3328]: I1124 00:36:12.176824 3328 scope.go:117] "RemoveContainer" 
containerID="7295b8cd26bbbdf10fffa77d35133d3496cfebaae3abdc2a2704558aba969260" Nov 24 00:36:12.180298 containerd[1991]: time="2025-11-24T00:36:12.179924032Z" level=info msg="CreateContainer within sandbox \"370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 24 00:36:12.205808 containerd[1991]: time="2025-11-24T00:36:12.205689450Z" level=info msg="Container a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:36:12.231833 containerd[1991]: time="2025-11-24T00:36:12.231547007Z" level=info msg="CreateContainer within sandbox \"370651d704632d2a7e5798469a04563836bb1bfa36dd146d664d9c1ae5e0ed9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc\"" Nov 24 00:36:12.232423 containerd[1991]: time="2025-11-24T00:36:12.232391009Z" level=info msg="StartContainer for \"a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc\"" Nov 24 00:36:12.234711 containerd[1991]: time="2025-11-24T00:36:12.234673602Z" level=info msg="connecting to shim a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc" address="unix:///run/containerd/s/ecd351aa0502cd1df828bee234b23ca97ede858165f861794db925031c46c057" protocol=ttrpc version=3 Nov 24 00:36:12.266719 systemd[1]: Started cri-containerd-a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc.scope - libcontainer container a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc. Nov 24 00:36:12.330138 containerd[1991]: time="2025-11-24T00:36:12.330098150Z" level=info msg="StartContainer for \"a4822f2f0ce04d8acf8ea2d84fc583ee6082543113484ab6ef8b0f0de32724cc\" returns successfully" Nov 24 00:36:12.719473 kubelet[3328]: E1124 00:36:12.717042 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef" Nov 24 00:36:13.670472 kubelet[3328]: E1124 00:36:13.670285 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b" Nov 24 00:36:14.671189 kubelet[3328]: E1124 00:36:14.671060 3328 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-dnbvv" podUID="3787437e-985f-4539-b2f2-cf4084ce8482" Nov 24 00:36:17.675543 kubelet[3328]: E1124 00:36:17.675490 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6594b85c5f-5gwgt" podUID="e9762179-b934-433c-b377-36b45fd610b8" Nov 24 00:36:18.854935 kubelet[3328]: E1124 00:36:18.854403 3328 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-18?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 00:36:21.361432 systemd[1]: cri-containerd-bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466.scope: Deactivated successfully. Nov 24 00:36:21.362907 containerd[1991]: time="2025-11-24T00:36:21.362054123Z" level=info msg="received container exit event container_id:\"bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466\" id:\"bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466\" pid:6040 exit_status:1 exited_at:{seconds:1763944581 nanos:361814726}" Nov 24 00:36:21.362237 systemd[1]: cri-containerd-bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466.scope: Consumed 387ms CPU time, 69M memory peak, 30.3M read from disk. Nov 24 00:36:21.391380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466-rootfs.mount: Deactivated successfully. 
Nov 24 00:36:22.227923 kubelet[3328]: I1124 00:36:22.227848 3328 scope.go:117] "RemoveContainer" containerID="7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698"
Nov 24 00:36:22.228439 kubelet[3328]: I1124 00:36:22.227982 3328 scope.go:117] "RemoveContainer" containerID="bf9d4ef849f9045ec202fc33544367a31a69daa580e31b5da4b0c74abcb7a466"
Nov 24 00:36:22.238655 kubelet[3328]: E1124 00:36:22.238602 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-rltg6_tigera-operator(e4d79e30-c62d-4a6e-a66e-1610f16cfa49)\"" pod="tigera-operator/tigera-operator-7dcd859c48-rltg6" podUID="e4d79e30-c62d-4a6e-a66e-1610f16cfa49"
Nov 24 00:36:22.273653 containerd[1991]: time="2025-11-24T00:36:22.273590988Z" level=info msg="RemoveContainer for \"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\""
Nov 24 00:36:22.291907 containerd[1991]: time="2025-11-24T00:36:22.291861305Z" level=info msg="RemoveContainer for \"7cba9fd52567e7135a7008cee589361b7d132caef6976a8d5a4e780c61c1d698\" returns successfully"
Nov 24 00:36:24.671293 kubelet[3328]: E1124 00:36:24.671248 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68b9c8d87-7ndft" podUID="6d264742-bc12-4821-8aea-351233494ad9"
Nov 24 00:36:24.672066 kubelet[3328]: E1124 00:36:24.672020 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-48vsh" podUID="96ab7330-0514-4b4d-8ac0-0b3305cdbb91"
Nov 24 00:36:25.671111 kubelet[3328]: E1124 00:36:25.671058 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84bf774ffb-kpf6d" podUID="6d71726d-c6c6-4b4a-9ff1-13f1cf35cfef"
Nov 24 00:36:26.670970 kubelet[3328]: E1124 00:36:26.670884 3328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-744x5" podUID="b03eefe9-3009-42ea-814c-37b36b40aa2b"