Nov 5 16:01:20.027116 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 16:01:20.027143 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 16:01:20.027156 kernel: BIOS-provided physical RAM map:
Nov 5 16:01:20.027163 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 16:01:20.027170 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 5 16:01:20.027177 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Nov 5 16:01:20.027186 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 5 16:01:20.027193 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 5 16:01:20.027201 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 5 16:01:20.027208 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 5 16:01:20.027218 kernel: NX (Execute Disable) protection: active
Nov 5 16:01:20.027225 kernel: APIC: Static calls initialized
Nov 5 16:01:20.027233 kernel: e820: update [mem 0x768bf018-0x768c7e57] usable ==> usable
Nov 5 16:01:20.027240 kernel: extended physical RAM map:
Nov 5 16:01:20.027268 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 16:01:20.027280 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768bf017] usable
Nov 5 16:01:20.027288 kernel: reserve setup_data: [mem 0x00000000768bf018-0x00000000768c7e57] usable
Nov 5 16:01:20.027296 kernel: reserve setup_data: [mem 0x00000000768c7e58-0x00000000786cdfff] usable
Nov 5 16:01:20.027305 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Nov 5 16:01:20.027313 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 5 16:01:20.027321 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 5 16:01:20.027329 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 5 16:01:20.027338 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 5 16:01:20.027346 kernel: efi: EFI v2.7 by EDK II
Nov 5 16:01:20.027356 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Nov 5 16:01:20.027364 kernel: secureboot: Secure boot disabled
Nov 5 16:01:20.027373 kernel: SMBIOS 2.7 present.
Nov 5 16:01:20.027381 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 5 16:01:20.027389 kernel: DMI: Memory slots populated: 1/1
Nov 5 16:01:20.027397 kernel: Hypervisor detected: KVM
Nov 5 16:01:20.027405 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 5 16:01:20.027413 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 16:01:20.027421 kernel: kvm-clock: using sched offset of 6287516439 cycles
Nov 5 16:01:20.027430 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 16:01:20.027439 kernel: tsc: Detected 2499.996 MHz processor
Nov 5 16:01:20.027450 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 16:01:20.027459 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 16:01:20.027468 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 5 16:01:20.027476 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 5 16:01:20.027485 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 16:01:20.027497 kernel: Using GB pages for direct mapping
Nov 5 16:01:20.027508 kernel: ACPI: Early table checksum verification disabled
Nov 5 16:01:20.027517 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 5 16:01:20.027526 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 5 16:01:20.027535 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 5 16:01:20.027544 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 5 16:01:20.027555 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 5 16:01:20.027564 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 5 16:01:20.027573 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 5 16:01:20.027582 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 5 16:01:20.027591 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 5 16:01:20.027600 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 5 16:01:20.027609 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 5 16:01:20.027620 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 5 16:01:20.027629 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 5 16:01:20.027638 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 5 16:01:20.027647 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 5 16:01:20.027656 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 5 16:01:20.027665 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 5 16:01:20.027674 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 5 16:01:20.027685 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 5 16:01:20.027693 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 5 16:01:20.027703 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 5 16:01:20.027711 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 5 16:01:20.027720 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 5 16:01:20.027729 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 5 16:01:20.027738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 5 16:01:20.027747 kernel: NUMA: Initialized distance table, cnt=1
Nov 5 16:01:20.027758 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Nov 5 16:01:20.027767 kernel: Zone ranges:
Nov 5 16:01:20.027776 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 16:01:20.027784 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 5 16:01:20.027793 kernel: Normal empty
Nov 5 16:01:20.027802 kernel: Device empty
Nov 5 16:01:20.027811 kernel: Movable zone start for each node
Nov 5 16:01:20.027822 kernel: Early memory node ranges
Nov 5 16:01:20.027831 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 5 16:01:20.027840 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 5 16:01:20.027849 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 5 16:01:20.027858 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 5 16:01:20.027867 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 16:01:20.027875 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 5 16:01:20.027884 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 5 16:01:20.027896 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 5 16:01:20.027905 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 5 16:01:20.027914 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 16:01:20.027923 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 5 16:01:20.027932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 16:01:20.027941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 16:01:20.027950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 16:01:20.027961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 16:01:20.027970 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 16:01:20.027979 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 16:01:20.027988 kernel: TSC deadline timer available
Nov 5 16:01:20.027997 kernel: CPU topo: Max. logical packages: 1
Nov 5 16:01:20.028006 kernel: CPU topo: Max. logical dies: 1
Nov 5 16:01:20.028015 kernel: CPU topo: Max. dies per package: 1
Nov 5 16:01:20.028025 kernel: CPU topo: Max. threads per core: 2
Nov 5 16:01:20.028035 kernel: CPU topo: Num. cores per package: 1
Nov 5 16:01:20.028043 kernel: CPU topo: Num. threads per package: 2
Nov 5 16:01:20.028052 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 16:01:20.028061 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 16:01:20.028070 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 5 16:01:20.028079 kernel: Booting paravirtualized kernel on KVM
Nov 5 16:01:20.028089 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 16:01:20.028100 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 16:01:20.028109 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 16:01:20.028118 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 16:01:20.028127 kernel: pcpu-alloc: [0] 0 1
Nov 5 16:01:20.028136 kernel: kvm-guest: PV spinlocks enabled
Nov 5 16:01:20.028145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 16:01:20.028156 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 16:01:20.028167 kernel: random: crng init done
Nov 5 16:01:20.028176 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 16:01:20.028185 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 5 16:01:20.028195 kernel: Fallback order for Node 0: 0
Nov 5 16:01:20.028204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Nov 5 16:01:20.028213 kernel: Policy zone: DMA32
Nov 5 16:01:20.028232 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 16:01:20.028241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 16:01:20.028260 kernel: Kernel/User page tables isolation: enabled
Nov 5 16:01:20.028273 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 16:01:20.028282 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 16:01:20.028291 kernel: Dynamic Preempt: voluntary
Nov 5 16:01:20.028301 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 16:01:20.028316 kernel: rcu: RCU event tracing is enabled.
Nov 5 16:01:20.028325 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 16:01:20.028335 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 16:01:20.028347 kernel: Rude variant of Tasks RCU enabled.
Nov 5 16:01:20.028357 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 16:01:20.028366 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 16:01:20.028375 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 16:01:20.028387 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 16:01:20.028397 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 16:01:20.028406 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 16:01:20.028416 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 16:01:20.028426 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 16:01:20.028435 kernel: Console: colour dummy device 80x25
Nov 5 16:01:20.028445 kernel: printk: legacy console [tty0] enabled
Nov 5 16:01:20.028457 kernel: printk: legacy console [ttyS0] enabled
Nov 5 16:01:20.028467 kernel: ACPI: Core revision 20240827
Nov 5 16:01:20.028476 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 5 16:01:20.028486 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 16:01:20.028495 kernel: x2apic enabled
Nov 5 16:01:20.028505 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 16:01:20.028515 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 5 16:01:20.028524 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Nov 5 16:01:20.028536 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 5 16:01:20.028546 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 5 16:01:20.028555 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 16:01:20.028564 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 16:01:20.028573 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 16:01:20.028582 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 5 16:01:20.028592 kernel: RETBleed: Vulnerable
Nov 5 16:01:20.028601 kernel: Speculative Store Bypass: Vulnerable
Nov 5 16:01:20.028610 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 16:01:20.028621 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 16:01:20.028630 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 5 16:01:20.028639 kernel: active return thunk: its_return_thunk
Nov 5 16:01:20.028649 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 5 16:01:20.028658 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 16:01:20.028667 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 16:01:20.028676 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 16:01:20.028686 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 5 16:01:20.028695 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 5 16:01:20.028706 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 5 16:01:20.028715 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 5 16:01:20.028724 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 5 16:01:20.028734 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 5 16:01:20.028743 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 16:01:20.028752 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 5 16:01:20.028761 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 5 16:01:20.028770 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 5 16:01:20.028779 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 5 16:01:20.028788 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 5 16:01:20.028798 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 5 16:01:20.028809 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 5 16:01:20.028819 kernel: Freeing SMP alternatives memory: 32K
Nov 5 16:01:20.028828 kernel: pid_max: default: 32768 minimum: 301
Nov 5 16:01:20.028837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 16:01:20.028846 kernel: landlock: Up and running.
Nov 5 16:01:20.028855 kernel: SELinux: Initializing.
Nov 5 16:01:20.028865 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 16:01:20.028874 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 16:01:20.028884 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 5 16:01:20.028893 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 5 16:01:20.028903 kernel: signal: max sigframe size: 3632
Nov 5 16:01:20.028915 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 16:01:20.028925 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 16:01:20.028934 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 16:01:20.028944 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 5 16:01:20.028953 kernel: smp: Bringing up secondary CPUs ...
Nov 5 16:01:20.028963 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 16:01:20.028972 kernel: .... node #0, CPUs: #1
Nov 5 16:01:20.028985 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 5 16:01:20.028995 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 5 16:01:20.029004 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 16:01:20.029014 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Nov 5 16:01:20.029024 kernel: Memory: 1930580K/2037804K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102660K reserved, 0K cma-reserved)
Nov 5 16:01:20.029033 kernel: devtmpfs: initialized
Nov 5 16:01:20.029045 kernel: x86/mm: Memory block size: 128MB
Nov 5 16:01:20.029055 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 5 16:01:20.029064 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 16:01:20.029074 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 16:01:20.029084 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 16:01:20.029094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 16:01:20.029103 kernel: audit: initializing netlink subsys (disabled)
Nov 5 16:01:20.029115 kernel: audit: type=2000 audit(1762358476.787:1): state=initialized audit_enabled=0 res=1
Nov 5 16:01:20.029124 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 16:01:20.029134 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 16:01:20.029143 kernel: cpuidle: using governor menu
Nov 5 16:01:20.029153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 16:01:20.029162 kernel: dca service started, version 1.12.1
Nov 5 16:01:20.029172 kernel: PCI: Using configuration type 1 for base access
Nov 5 16:01:20.029182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 16:01:20.029194 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 16:01:20.029203 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 16:01:20.029213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 16:01:20.029223 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 16:01:20.029232 kernel: ACPI: Added _OSI(Module Device)
Nov 5 16:01:20.029242 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 16:01:20.029264 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 16:01:20.029276 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 5 16:01:20.029286 kernel: ACPI: Interpreter enabled
Nov 5 16:01:20.029296 kernel: ACPI: PM: (supports S0 S5)
Nov 5 16:01:20.029305 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 16:01:20.029315 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 16:01:20.029325 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 16:01:20.029335 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 5 16:01:20.029347 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 16:01:20.029555 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 16:01:20.029691 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 5 16:01:20.029821 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 5 16:01:20.029833 kernel: acpiphp: Slot [3] registered
Nov 5 16:01:20.029847 kernel: acpiphp: Slot [4] registered
Nov 5 16:01:20.029856 kernel: acpiphp: Slot [5] registered
Nov 5 16:01:20.029866 kernel: acpiphp: Slot [6] registered
Nov 5 16:01:20.029875 kernel: acpiphp: Slot [7] registered
Nov 5 16:01:20.029884 kernel: acpiphp: Slot [8] registered
Nov 5 16:01:20.029893 kernel: acpiphp: Slot [9] registered
Nov 5 16:01:20.029903 kernel: acpiphp: Slot [10] registered
Nov 5 16:01:20.029912 kernel: acpiphp: Slot [11] registered
Nov 5 16:01:20.029924 kernel: acpiphp: Slot [12] registered
Nov 5 16:01:20.029934 kernel: acpiphp: Slot [13] registered
Nov 5 16:01:20.029943 kernel: acpiphp: Slot [14] registered
Nov 5 16:01:20.029953 kernel: acpiphp: Slot [15] registered
Nov 5 16:01:20.029962 kernel: acpiphp: Slot [16] registered
Nov 5 16:01:20.029972 kernel: acpiphp: Slot [17] registered
Nov 5 16:01:20.029981 kernel: acpiphp: Slot [18] registered
Nov 5 16:01:20.029994 kernel: acpiphp: Slot [19] registered
Nov 5 16:01:20.030003 kernel: acpiphp: Slot [20] registered
Nov 5 16:01:20.030012 kernel: acpiphp: Slot [21] registered
Nov 5 16:01:20.030022 kernel: acpiphp: Slot [22] registered
Nov 5 16:01:20.030031 kernel: acpiphp: Slot [23] registered
Nov 5 16:01:20.030040 kernel: acpiphp: Slot [24] registered
Nov 5 16:01:20.030050 kernel: acpiphp: Slot [25] registered
Nov 5 16:01:20.030059 kernel: acpiphp: Slot [26] registered
Nov 5 16:01:20.030071 kernel: acpiphp: Slot [27] registered
Nov 5 16:01:20.030081 kernel: acpiphp: Slot [28] registered
Nov 5 16:01:20.030090 kernel: acpiphp: Slot [29] registered
Nov 5 16:01:20.030100 kernel: acpiphp: Slot [30] registered
Nov 5 16:01:20.030109 kernel: acpiphp: Slot [31] registered
Nov 5 16:01:20.030119 kernel: PCI host bridge to bus 0000:00
Nov 5 16:01:20.030882 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 16:01:20.031043 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 16:01:20.031162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 16:01:20.031292 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 5 16:01:20.031410 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 5 16:01:20.031526 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 16:01:20.031681 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 5 16:01:20.031819 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 5 16:01:20.031954 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Nov 5 16:01:20.032085 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 5 16:01:20.032213 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 5 16:01:20.032358 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 5 16:01:20.032485 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 5 16:01:20.032610 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 5 16:01:20.032737 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 5 16:01:20.032862 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 5 16:01:20.032996 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 16:01:20.033127 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Nov 5 16:01:20.034084 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 5 16:01:20.034241 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 16:01:20.034406 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Nov 5 16:01:20.034540 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Nov 5 16:01:20.034677 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Nov 5 16:01:20.034820 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Nov 5 16:01:20.034844 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 16:01:20.034860 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 16:01:20.034875 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 16:01:20.034890 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 16:01:20.034904 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 5 16:01:20.034922 kernel: iommu: Default domain type: Translated
Nov 5 16:01:20.034932 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 16:01:20.034942 kernel: efivars: Registered efivars operations
Nov 5 16:01:20.034952 kernel: PCI: Using ACPI for IRQ routing
Nov 5 16:01:20.034962 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 16:01:20.034971 kernel: e820: reserve RAM buffer [mem 0x768bf018-0x77ffffff]
Nov 5 16:01:20.034980 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 5 16:01:20.034989 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 5 16:01:20.035377 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 5 16:01:20.035584 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 5 16:01:20.035780 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 16:01:20.035802 kernel: vgaarb: loaded
Nov 5 16:01:20.035819 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 5 16:01:20.035835 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 5 16:01:20.035852 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 16:01:20.035874 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 16:01:20.035891 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 16:01:20.035908 kernel: pnp: PnP ACPI init
Nov 5 16:01:20.035925 kernel: pnp: PnP ACPI: found 5 devices
Nov 5 16:01:20.035941 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 16:01:20.035958 kernel: NET: Registered PF_INET protocol family
Nov 5 16:01:20.035975 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 16:01:20.035995 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 5 16:01:20.036012 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 16:01:20.036028 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 5 16:01:20.036045 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 5 16:01:20.036061 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 5 16:01:20.036078 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 16:01:20.036095 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 16:01:20.036115 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 16:01:20.036131 kernel: NET: Registered PF_XDP protocol family
Nov 5 16:01:20.036394 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 16:01:20.036572 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 16:01:20.036745 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 16:01:20.036913 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 5 16:01:20.037077 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 5 16:01:20.037287 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 5 16:01:20.037310 kernel: PCI: CLS 0 bytes, default 64
Nov 5 16:01:20.037326 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 5 16:01:20.037343 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 5 16:01:20.037358 kernel: clocksource: Switched to clocksource tsc
Nov 5 16:01:20.037374 kernel: Initialise system trusted keyrings
Nov 5 16:01:20.037390 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 5 16:01:20.037409 kernel: Key type asymmetric registered
Nov 5 16:01:20.037424 kernel: Asymmetric key parser 'x509' registered
Nov 5 16:01:20.037440 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 16:01:20.037456 kernel: io scheduler mq-deadline registered
Nov 5 16:01:20.037472 kernel: io scheduler kyber registered
Nov 5 16:01:20.037488 kernel: io scheduler bfq registered
Nov 5 16:01:20.037503 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 16:01:20.037521 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 16:01:20.037537 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 16:01:20.037553 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 16:01:20.037568 kernel: i8042: Warning: Keylock active
Nov 5 16:01:20.037583 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 16:01:20.037599 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 16:01:20.037796 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 5 16:01:20.037975 kernel: rtc_cmos 00:00: registered as rtc0
Nov 5 16:01:20.038150 kernel: rtc_cmos 00:00: setting system clock to 2025-11-05T16:01:16 UTC (1762358476)
Nov 5 16:01:20.038333 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 5 16:01:20.038377 kernel: intel_pstate: CPU model not supported
Nov 5 16:01:20.038396 kernel: efifb: probing for efifb
Nov 5 16:01:20.038412 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Nov 5 16:01:20.038433 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 5 16:01:20.038450 kernel: efifb: scrolling: redraw
Nov 5 16:01:20.038466 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 16:01:20.038483 kernel: Console: switching to colour frame buffer device 100x37
Nov 5 16:01:20.038500 kernel: fb0: EFI VGA frame buffer device
Nov 5 16:01:20.038517 kernel: pstore: Using crash dump compression: deflate
Nov 5 16:01:20.038535 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 5 16:01:20.038554 kernel: NET: Registered PF_INET6 protocol family
Nov 5 16:01:20.038570 kernel: Segment Routing with IPv6
Nov 5 16:01:20.038587 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 16:01:20.038604 kernel: NET: Registered PF_PACKET protocol family
Nov 5 16:01:20.038620 kernel: Key type dns_resolver registered
Nov 5 16:01:20.038636 kernel: IPI shorthand broadcast: enabled
Nov 5 16:01:20.038653 kernel: sched_clock: Marking stable (825003955, 156068921)->(1061589517, -80516641)
Nov 5 16:01:20.038672 kernel: registered taskstats version 1
Nov 5 16:01:20.038688 kernel: Loading compiled-in X.509 certificates
Nov 5 16:01:20.038703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 16:01:20.038717 kernel: Demotion targets for Node 0: null
Nov 5 16:01:20.038733 kernel: Key type .fscrypt registered
Nov 5 16:01:20.038748 kernel: Key type fscrypt-provisioning registered
Nov 5 16:01:20.038763 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 16:01:20.038778 kernel: ima: Allocated hash algorithm: sha1
Nov 5 16:01:20.038797 kernel: ima: No architecture policies found
Nov 5 16:01:20.038812 kernel: clk: Disabling unused clocks
Nov 5 16:01:20.038929 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 16:01:20.038950 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 16:01:20.038975 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 16:01:20.038993 kernel: Run /init as init process
Nov 5 16:01:20.039012 kernel: with arguments:
Nov 5 16:01:20.039031 kernel: /init
Nov 5 16:01:20.039049 kernel: with environment:
Nov 5 16:01:20.039067 kernel: HOME=/
Nov 5 16:01:20.039086 kernel: TERM=linux
Nov 5 16:01:20.039301 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 5 16:01:20.039333 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 5 16:01:20.039483 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 5 16:01:20.039508 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 16:01:20.039527 kernel: GPT:25804799 != 33554431
Nov 5 16:01:20.039546 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 16:01:20.039569 kernel: GPT:25804799 != 33554431
Nov 5 16:01:20.039591 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 16:01:20.039609 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 5 16:01:20.039627 kernel: SCSI subsystem initialized
Nov 5 16:01:20.039647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 16:01:20.039665 kernel: device-mapper: uevent: version 1.0.3
Nov 5 16:01:20.039684 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 16:01:20.039706 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 16:01:20.039725 kernel: raid6: avx512x4 gen() 17972 MB/s
Nov 5 16:01:20.039745 kernel: raid6: avx512x2 gen() 18038 MB/s
Nov 5 16:01:20.039762 kernel: raid6: avx512x1 gen() 18017 MB/s
Nov 5 16:01:20.039781 kernel: raid6: avx2x4 gen() 17869 MB/s
Nov 5 16:01:20.039800 kernel: raid6: avx2x2 gen() 16936 MB/s
Nov 5 16:01:20.039818 kernel: raid6: avx2x1 gen() 13825 MB/s
Nov 5 16:01:20.039840 kernel: raid6: using algorithm avx512x2 gen() 18038 MB/s
Nov 5 16:01:20.039858 kernel: raid6: .... xor() 24337 MB/s, rmw enabled
Nov 5 16:01:20.039877 kernel: raid6: using avx512x2 recovery algorithm
Nov 5 16:01:20.039896 kernel: xor: automatically using best checksumming function avx
Nov 5 16:01:20.039914 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 16:01:20.039931 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 16:01:20.039948 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (151)
Nov 5 16:01:20.039969 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 16:01:20.039987 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 16:01:20.040003 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 5 16:01:20.040021 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 16:01:20.040039 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 16:01:20.040056 kernel: loop: module loaded
Nov 5 16:01:20.040074 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 16:01:20.040094 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 16:01:20.040114 systemd[1]: Successfully made /usr/ read-only.
Nov 5 16:01:20.040142 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 16:01:20.040160 systemd[1]: Detected virtualization amazon.
Nov 5 16:01:20.040178 systemd[1]: Detected architecture x86-64.
Nov 5 16:01:20.040197 systemd[1]: Running in initrd.
Nov 5 16:01:20.040220 systemd[1]: No hostname configured, using default hostname.
Nov 5 16:01:20.040240 systemd[1]: Hostname set to .
Nov 5 16:01:20.040280 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 16:01:20.040300 systemd[1]: Queued start job for default target initrd.target. Nov 5 16:01:20.040320 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 16:01:20.040340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:01:20.040365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:01:20.040385 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 16:01:20.040406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 16:01:20.040428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 16:01:20.040449 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 16:01:20.040468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:01:20.040493 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:01:20.040513 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 16:01:20.040533 systemd[1]: Reached target paths.target - Path Units. Nov 5 16:01:20.040552 systemd[1]: Reached target slices.target - Slice Units. Nov 5 16:01:20.040572 systemd[1]: Reached target swap.target - Swaps. Nov 5 16:01:20.040591 systemd[1]: Reached target timers.target - Timer Units. Nov 5 16:01:20.040612 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 16:01:20.040633 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 16:01:20.040651 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 16:01:20.040669 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 16:01:20.040688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:01:20.040706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 16:01:20.040723 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:01:20.040740 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 16:01:20.040763 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 16:01:20.040781 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 16:01:20.040798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 16:01:20.040816 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 16:01:20.040837 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 16:01:20.040852 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 16:01:20.040874 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 16:01:20.040894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 16:01:20.040914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:20.040935 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 16:01:20.040959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:01:20.040979 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 16:01:20.040999 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 16:01:20.041018 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 5 16:01:20.041036 kernel: Bridge firewalling registered Nov 5 16:01:20.041053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 16:01:20.041071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:01:20.041136 systemd-journald[288]: Collecting audit messages is disabled. Nov 5 16:01:20.041175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 16:01:20.041193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 16:01:20.041215 systemd-journald[288]: Journal started Nov 5 16:01:20.041249 systemd-journald[288]: Runtime Journal (/run/log/journal/ec23f1513dc5dcbfb8811be8d96c46b9) is 4.7M, max 38.1M, 33.3M free. Nov 5 16:01:20.015041 systemd-modules-load[290]: Inserted module 'br_netfilter' Nov 5 16:01:20.046281 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 16:01:20.054398 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 16:01:20.070854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:01:20.080557 systemd-tmpfiles[308]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 16:01:20.084181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:01:20.091977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 16:01:20.095081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:01:20.098661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:20.110663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 16:01:20.139510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 5 16:01:20.143325 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 16:01:20.213056 dracut-cmdline[329]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 16:01:20.213935 systemd-resolved[314]: Positive Trust Anchors: Nov 5 16:01:20.213949 systemd-resolved[314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 16:01:20.213955 systemd-resolved[314]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 16:01:20.214015 systemd-resolved[314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 16:01:20.255153 systemd-resolved[314]: Defaulting to hostname 'linux'. Nov 5 16:01:20.257675 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 16:01:20.259136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:01:20.402276 kernel: Loading iSCSI transport class v2.0-870. 
Nov 5 16:01:20.492284 kernel: iscsi: registered transport (tcp) Nov 5 16:01:20.516530 kernel: iscsi: registered transport (qla4xxx) Nov 5 16:01:20.516624 kernel: QLogic iSCSI HBA Driver Nov 5 16:01:20.544390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 16:01:20.566773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:01:20.567930 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 16:01:20.627655 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 16:01:20.629613 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 16:01:20.632330 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 16:01:20.668457 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 16:01:20.673465 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:01:20.715383 systemd-udevd[572]: Using default interface naming scheme 'v257'. Nov 5 16:01:20.734788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:01:20.742197 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 16:01:20.756506 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 16:01:20.762443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 16:01:20.775159 dracut-pre-trigger[658]: rd.md=0: removing MD RAID activation Nov 5 16:01:20.817600 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 16:01:20.820593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 16:01:20.830805 systemd-networkd[668]: lo: Link UP Nov 5 16:01:20.830816 systemd-networkd[668]: lo: Gained carrier Nov 5 16:01:20.832297 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 16:01:20.833860 systemd[1]: Reached target network.target - Network. Nov 5 16:01:20.888817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:01:20.892542 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 16:01:21.007872 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:01:21.008277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:21.009341 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:21.014514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:21.022399 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 5 16:01:21.022722 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 5 16:01:21.026364 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 5 16:01:21.032331 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:50:8a:0f:0c:f7 Nov 5 16:01:21.034575 (udev-worker)[714]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:01:21.077945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:01:21.078086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:21.084477 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 16:01:21.081279 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:01:21.081289 systemd-networkd[668]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 16:01:21.089499 systemd-networkd[668]: eth0: Link UP Nov 5 16:01:21.089719 systemd-networkd[668]: eth0: Gained carrier Nov 5 16:01:21.089739 systemd-networkd[668]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:01:21.096178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:21.102333 systemd-networkd[668]: eth0: DHCPv4 address 172.31.17.172/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 16:01:21.148279 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 16:01:21.176354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:21.220725 kernel: AES CTR mode by8 optimization enabled Nov 5 16:01:21.239292 kernel: nvme nvme0: using unchecked data buffer Nov 5 16:01:21.342457 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 5 16:01:21.352297 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 16:01:21.383733 disk-uuid[843]: Primary Header is updated. Nov 5 16:01:21.383733 disk-uuid[843]: Secondary Entries is updated. Nov 5 16:01:21.383733 disk-uuid[843]: Secondary Header is updated. Nov 5 16:01:21.441884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 16:01:21.458495 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 5 16:01:21.479414 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 5 16:01:21.762043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 16:01:21.763983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 16:01:21.764538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 5 16:01:21.765821 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 16:01:21.768035 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 16:01:21.803051 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 16:01:22.547693 disk-uuid[854]: Warning: The kernel is still using the old partition table. Nov 5 16:01:22.547693 disk-uuid[854]: The new table will be used at the next reboot or after you Nov 5 16:01:22.547693 disk-uuid[854]: run partprobe(8) or kpartx(8) Nov 5 16:01:22.547693 disk-uuid[854]: The operation has completed successfully. Nov 5 16:01:22.557386 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 16:01:22.557533 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 16:01:22.559499 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 16:01:22.599292 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1083) Nov 5 16:01:22.603566 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:22.603642 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:22.648856 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 16:01:22.648927 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 16:01:22.657331 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:22.658075 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 16:01:22.659951 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 5 16:01:22.730456 systemd-networkd[668]: eth0: Gained IPv6LL Nov 5 16:01:23.783215 ignition[1102]: Ignition 2.22.0 Nov 5 16:01:23.783233 ignition[1102]: Stage: fetch-offline Nov 5 16:01:23.783496 ignition[1102]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:23.783510 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:23.784002 ignition[1102]: Ignition finished successfully Nov 5 16:01:23.785934 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 16:01:23.788496 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 5 16:01:23.824822 ignition[1109]: Ignition 2.22.0 Nov 5 16:01:23.824840 ignition[1109]: Stage: fetch Nov 5 16:01:23.825243 ignition[1109]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:23.825285 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:23.825399 ignition[1109]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:23.897148 ignition[1109]: PUT result: OK Nov 5 16:01:23.899613 ignition[1109]: parsed url from cmdline: "" Nov 5 16:01:23.899624 ignition[1109]: no config URL provided Nov 5 16:01:23.899632 ignition[1109]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 16:01:23.899650 ignition[1109]: no config at "/usr/lib/ignition/user.ign" Nov 5 16:01:23.899668 ignition[1109]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:23.900623 ignition[1109]: PUT result: OK Nov 5 16:01:23.900684 ignition[1109]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 5 16:01:23.901713 ignition[1109]: GET result: OK Nov 5 16:01:23.901829 ignition[1109]: parsing config with SHA512: 50191a075f8deaae691dd2f630a6f9795e99253418069ddd201fa1f6d663bb37784c8b8dc3a2f7619fad9f3c9f0a678033003cdf02af40eeb5c0d5fb8fd11a7d Nov 5 16:01:23.907557 unknown[1109]: fetched base config from "system" Nov 5 16:01:23.907567 unknown[1109]: fetched base config from "system" Nov 5 16:01:23.907893 
ignition[1109]: fetch: fetch complete Nov 5 16:01:23.907572 unknown[1109]: fetched user config from "aws" Nov 5 16:01:23.907897 ignition[1109]: fetch: fetch passed Nov 5 16:01:23.907941 ignition[1109]: Ignition finished successfully Nov 5 16:01:23.910241 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 16:01:23.912073 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 16:01:23.948526 ignition[1116]: Ignition 2.22.0 Nov 5 16:01:23.948537 ignition[1116]: Stage: kargs Nov 5 16:01:23.948828 ignition[1116]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:23.948836 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:23.948916 ignition[1116]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:23.950296 ignition[1116]: PUT result: OK Nov 5 16:01:23.956444 ignition[1116]: kargs: kargs passed Nov 5 16:01:23.956536 ignition[1116]: Ignition finished successfully Nov 5 16:01:23.958717 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 16:01:23.960644 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 16:01:23.993472 ignition[1122]: Ignition 2.22.0 Nov 5 16:01:23.993486 ignition[1122]: Stage: disks Nov 5 16:01:23.993874 ignition[1122]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:23.993886 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:23.993998 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:23.999914 ignition[1122]: PUT result: OK Nov 5 16:01:24.003451 ignition[1122]: disks: disks passed Nov 5 16:01:24.003528 ignition[1122]: Ignition finished successfully Nov 5 16:01:24.005815 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 16:01:24.006495 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 16:01:24.007073 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Nov 5 16:01:24.007698 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 16:01:24.008312 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 16:01:24.008900 systemd[1]: Reached target basic.target - Basic System. Nov 5 16:01:24.010704 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 16:01:24.141861 systemd-fsck[1131]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 16:01:24.144716 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 16:01:24.147422 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 16:01:24.422293 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 16:01:24.422561 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 16:01:24.423712 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 16:01:24.512867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 16:01:24.516368 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 16:01:24.518446 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 16:01:24.519338 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 16:01:24.519373 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 16:01:24.531143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 16:01:24.533504 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 5 16:01:24.547282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1150) Nov 5 16:01:24.550291 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:24.550361 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:24.558414 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 16:01:24.558480 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 16:01:24.560903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 16:01:25.667584 initrd-setup-root[1174]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 16:01:25.699640 initrd-setup-root[1181]: cut: /sysroot/etc/group: No such file or directory Nov 5 16:01:25.704612 initrd-setup-root[1188]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 16:01:25.710566 initrd-setup-root[1195]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 16:01:26.517833 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 16:01:26.521035 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 16:01:26.525435 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 16:01:26.553909 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 16:01:26.557311 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:26.590030 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 5 16:01:26.597978 ignition[1263]: INFO : Ignition 2.22.0 Nov 5 16:01:26.597978 ignition[1263]: INFO : Stage: mount Nov 5 16:01:26.599803 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:26.599803 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:26.599803 ignition[1263]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:26.599803 ignition[1263]: INFO : PUT result: OK Nov 5 16:01:26.602340 ignition[1263]: INFO : mount: mount passed Nov 5 16:01:26.603334 ignition[1263]: INFO : Ignition finished successfully Nov 5 16:01:26.604232 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 16:01:26.606443 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 16:01:26.628659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 16:01:26.659300 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1274) Nov 5 16:01:26.663055 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:26.663119 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:26.672108 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 16:01:26.672190 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 16:01:26.674082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 16:01:26.710808 ignition[1291]: INFO : Ignition 2.22.0 Nov 5 16:01:26.710808 ignition[1291]: INFO : Stage: files Nov 5 16:01:26.712377 ignition[1291]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:26.712377 ignition[1291]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:26.712377 ignition[1291]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:26.713843 ignition[1291]: INFO : PUT result: OK Nov 5 16:01:26.715706 ignition[1291]: DEBUG : files: compiled without relabeling support, skipping Nov 5 16:01:26.717004 ignition[1291]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 16:01:26.717004 ignition[1291]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 16:01:26.726443 ignition[1291]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 16:01:26.727516 ignition[1291]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 16:01:26.728153 ignition[1291]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 16:01:26.727928 unknown[1291]: wrote ssh authorized keys file for user: core Nov 5 16:01:26.761946 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 16:01:26.763413 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 5 16:01:26.836117 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 16:01:27.034626 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 
16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 16:01:27.035915 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 16:01:27.041836 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 16:01:27.041836 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 16:01:27.041836 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 16:01:27.044796 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 16:01:27.044796 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 16:01:27.044796 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 5 16:01:27.544469 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 16:01:28.076352 ignition[1291]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 16:01:28.076352 ignition[1291]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 16:01:28.078394 ignition[1291]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 16:01:28.082049 ignition[1291]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 16:01:28.082049 ignition[1291]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 16:01:28.082049 ignition[1291]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 16:01:28.085734 ignition[1291]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 16:01:28.085734 ignition[1291]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:01:28.085734 ignition[1291]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:01:28.085734 ignition[1291]: INFO : files: files passed Nov 5 16:01:28.085734 ignition[1291]: INFO : Ignition finished successfully Nov 5 16:01:28.083941 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 16:01:28.087479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Nov 5 16:01:28.090989 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 16:01:28.101360 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 16:01:28.101501 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 16:01:28.113756 initrd-setup-root-after-ignition[1323]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 16:01:28.116068 initrd-setup-root-after-ignition[1327]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 16:01:28.117095 initrd-setup-root-after-ignition[1323]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 16:01:28.118719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 16:01:28.119643 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 16:01:28.121781 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 16:01:28.185536 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 16:01:28.185672 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 16:01:28.187508 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 16:01:28.188330 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 16:01:28.189494 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 16:01:28.190734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 16:01:28.221583 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 16:01:28.224126 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 16:01:28.252774 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 16:01:28.253165 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 16:01:28.253908 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 16:01:28.255170 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 16:01:28.256058 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 16:01:28.256327 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 16:01:28.257494 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 16:01:28.258413 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 16:01:28.259385 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 16:01:28.260119 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 16:01:28.260959 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 16:01:28.261758 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 16:01:28.262546 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 16:01:28.263464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 16:01:28.264337 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 16:01:28.265132 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 16:01:28.266264 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 16:01:28.267190 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 16:01:28.267472 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 16:01:28.268529 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 16:01:28.269339 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 16:01:28.270064 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 16:01:28.270233 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 16:01:28.271061 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 16:01:28.271246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 16:01:28.272280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 16:01:28.272490 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 16:01:28.273658 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 16:01:28.273872 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 16:01:28.277381 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 16:01:28.280372 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 16:01:28.281024 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 16:01:28.281296 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 16:01:28.283571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 16:01:28.283801 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 16:01:28.286246 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 16:01:28.286486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 16:01:28.293887 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 16:01:28.294012 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 16:01:28.322678 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 16:01:28.324018 ignition[1347]: INFO : Ignition 2.22.0
Nov 5 16:01:28.324018 ignition[1347]: INFO : Stage: umount
Nov 5 16:01:28.325665 ignition[1347]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 16:01:28.325665 ignition[1347]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 16:01:28.325665 ignition[1347]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 16:01:28.327066 ignition[1347]: INFO : PUT result: OK
Nov 5 16:01:28.328593 ignition[1347]: INFO : umount: umount passed
Nov 5 16:01:28.329163 ignition[1347]: INFO : Ignition finished successfully
Nov 5 16:01:28.331051 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 16:01:28.331225 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 16:01:28.332339 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 16:01:28.332418 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 16:01:28.332894 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 16:01:28.332957 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 16:01:28.333583 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 16:01:28.333652 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 16:01:28.334313 systemd[1]: Stopped target network.target - Network.
Nov 5 16:01:28.335003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 16:01:28.335070 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 16:01:28.336495 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 16:01:28.337071 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 16:01:28.339322 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 16:01:28.339783 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 16:01:28.340673 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 16:01:28.341338 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 16:01:28.341399 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 16:01:28.341992 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 16:01:28.342040 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 16:01:28.342635 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 16:01:28.342736 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 16:01:28.343489 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 16:01:28.343556 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 16:01:28.344238 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 16:01:28.344897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 16:01:28.352972 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 16:01:28.353133 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 16:01:28.357462 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 16:01:28.357615 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 16:01:28.360317 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 16:01:28.361279 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 16:01:28.361329 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 16:01:28.363502 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 16:01:28.364645 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 16:01:28.365234 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 16:01:28.367114 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 16:01:28.367746 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 16:01:28.368985 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 16:01:28.369491 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 16:01:28.370613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 16:01:28.388108 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 16:01:28.388420 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 16:01:28.389484 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 16:01:28.389545 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 16:01:28.390232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 16:01:28.390323 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 16:01:28.390945 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 16:01:28.391019 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 16:01:28.394515 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 16:01:28.394592 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 16:01:28.395793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 16:01:28.395868 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 16:01:28.399469 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 16:01:28.400014 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 16:01:28.400092 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 16:01:28.402388 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 16:01:28.402460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 16:01:28.403628 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 5 16:01:28.403699 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 16:01:28.405386 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 16:01:28.405453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 16:01:28.406011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 16:01:28.406072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:01:28.422072 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 16:01:28.423568 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 16:01:28.428207 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 16:01:28.428406 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 16:01:28.468310 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 16:01:28.468425 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 16:01:28.469573 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 16:01:28.470089 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 16:01:28.470151 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 16:01:28.471707 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 16:01:28.491143 systemd[1]: Switching root.
Nov 5 16:01:28.596309 systemd-journald[288]: Received SIGTERM from PID 1 (systemd).
Nov 5 16:01:28.596405 systemd-journald[288]: Journal stopped
Nov 5 16:01:32.015018 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 16:01:32.015107 kernel: SELinux: policy capability open_perms=1
Nov 5 16:01:32.015138 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 16:01:32.015171 kernel: SELinux: policy capability always_check_network=0
Nov 5 16:01:32.015194 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 16:01:32.015222 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 16:01:32.015289 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 16:01:32.015321 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 16:01:32.015344 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 16:01:32.015364 kernel: audit: type=1403 audit(1762358489.292:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 16:01:32.015386 systemd[1]: Successfully loaded SELinux policy in 104.994ms.
Nov 5 16:01:32.015412 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.275ms.
Nov 5 16:01:32.015434 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 16:01:32.016359 systemd[1]: Detected virtualization amazon.
Nov 5 16:01:32.016396 systemd[1]: Detected architecture x86-64.
Nov 5 16:01:32.016421 systemd[1]: Detected first boot.
Nov 5 16:01:32.016443 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 16:01:32.016471 zram_generator::config[1391]: No configuration found.
Nov 5 16:01:32.016495 kernel: Guest personality initialized and is inactive
Nov 5 16:01:32.016517 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 16:01:32.016538 kernel: Initialized host personality
Nov 5 16:01:32.016558 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 16:01:32.016583 systemd[1]: Populated /etc with preset unit settings.
Nov 5 16:01:32.016604 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 16:01:32.016626 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 16:01:32.016649 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 16:01:32.016672 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 16:01:32.016695 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 16:01:32.016717 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 16:01:32.016742 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 16:01:32.016763 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 16:01:32.016786 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 16:01:32.016808 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 16:01:32.016830 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 16:01:32.016852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 16:01:32.016875 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 16:01:32.016900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 16:01:32.016922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 16:01:32.016945 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 16:01:32.016968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 16:01:32.016989 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 16:01:32.017015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 16:01:32.017037 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 16:01:32.017058 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 16:01:32.017080 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 16:01:32.017103 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 16:01:32.017125 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 16:01:32.017147 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 16:01:32.017170 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 16:01:32.017196 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 16:01:32.017217 systemd[1]: Reached target swap.target - Swaps.
Nov 5 16:01:32.017240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 16:01:32.017276 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 16:01:32.017299 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 16:01:32.017322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 16:01:32.017344 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 16:01:32.017369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 16:01:32.017392 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 16:01:32.017412 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 16:01:32.017433 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 16:01:32.017455 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 16:01:32.017478 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 16:01:32.017498 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 16:01:32.018889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 16:01:32.018939 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 16:01:32.018965 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 16:01:32.018989 systemd[1]: Reached target machines.target - Containers.
Nov 5 16:01:32.019044 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 16:01:32.019065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 16:01:32.019087 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 16:01:32.019115 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 16:01:32.019139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 16:01:32.019159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 16:01:32.019178 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 16:01:32.019199 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 16:01:32.019222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 16:01:32.019244 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 16:01:32.019325 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 16:01:32.019345 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 16:01:32.019364 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 16:01:32.019384 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 16:01:32.019405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 16:01:32.019423 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 16:01:32.019455 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 16:01:32.019476 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 16:01:32.019499 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 16:01:32.019520 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 16:01:32.019543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 16:01:32.019563 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 16:01:32.019582 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 16:01:32.019602 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 16:01:32.019623 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 16:01:32.019646 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 16:01:32.019667 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 16:01:32.019693 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 16:01:32.019714 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 16:01:32.019736 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 16:01:32.019758 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 16:01:32.019781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 16:01:32.019803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 16:01:32.019824 kernel: fuse: init (API version 7.41)
Nov 5 16:01:32.019852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 16:01:32.019874 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 16:01:32.019897 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 16:01:32.019920 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 16:01:32.019946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 16:01:32.019967 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 16:01:32.019988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 16:01:32.020054 systemd-journald[1470]: Collecting audit messages is disabled.
Nov 5 16:01:32.020096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 16:01:32.020119 systemd-journald[1470]: Journal started
Nov 5 16:01:32.020159 systemd-journald[1470]: Runtime Journal (/run/log/journal/ec23f1513dc5dcbfb8811be8d96c46b9) is 4.7M, max 38.1M, 33.3M free.
Nov 5 16:01:31.632232 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 16:01:31.647088 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 5 16:01:31.648428 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 16:01:32.025306 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 16:01:32.025318 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 16:01:32.036555 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 16:01:32.038678 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 16:01:32.041403 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 16:01:32.047820 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 16:01:32.052373 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 16:01:32.052438 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 16:01:32.058866 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 16:01:32.062493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 16:01:32.068465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 16:01:32.070698 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 16:01:32.071815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 16:01:32.076138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 16:01:32.076918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 16:01:32.078593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 16:01:32.086538 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 16:01:32.091527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 16:01:32.096677 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 16:01:32.097524 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 16:01:32.099373 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 16:01:32.118851 systemd-journald[1470]: Time spent on flushing to /var/log/journal/ec23f1513dc5dcbfb8811be8d96c46b9 is 58.414ms for 999 entries.
Nov 5 16:01:32.118851 systemd-journald[1470]: System Journal (/var/log/journal/ec23f1513dc5dcbfb8811be8d96c46b9) is 8M, max 588.1M, 580.1M free.
Nov 5 16:01:32.210660 systemd-journald[1470]: Received client request to flush runtime journal.
Nov 5 16:01:32.210752 kernel: loop1: detected capacity change from 0 to 224512
Nov 5 16:01:32.117718 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 16:01:32.132619 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 16:01:32.134247 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 16:01:32.136526 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 16:01:32.173511 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 16:01:32.218345 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 16:01:32.241758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 16:01:32.293354 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 16:01:32.297519 kernel: ACPI: bus type drm_connector registered
Nov 5 16:01:32.294755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 16:01:32.295021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 16:01:32.298869 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
Nov 5 16:01:32.299361 systemd-tmpfiles[1524]: ACLs are not supported, ignoring.
Nov 5 16:01:32.306355 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 16:01:32.310453 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 16:01:32.370780 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 16:01:32.375634 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 16:01:32.378533 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 16:01:32.411226 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Nov 5 16:01:32.411643 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Nov 5 16:01:32.417074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 16:01:32.470435 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 16:01:32.520029 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 16:01:32.571279 kernel: loop2: detected capacity change from 0 to 128048
Nov 5 16:01:32.650232 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 16:01:32.704719 systemd-resolved[1545]: Positive Trust Anchors:
Nov 5 16:01:32.705126 systemd-resolved[1545]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 16:01:32.705137 systemd-resolved[1545]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 16:01:32.705203 systemd-resolved[1545]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 16:01:32.712129 systemd-resolved[1545]: Defaulting to hostname 'linux'.
Nov 5 16:01:32.713995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 16:01:32.714911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 16:01:32.989307 kernel: loop3: detected capacity change from 0 to 110984
Nov 5 16:01:33.041011 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 16:01:33.043903 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 16:01:33.081268 systemd-udevd[1559]: Using default interface naming scheme 'v257'.
Nov 5 16:01:33.348684 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 16:01:33.356971 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 16:01:33.381279 kernel: loop4: detected capacity change from 0 to 72360
Nov 5 16:01:33.414349 (udev-worker)[1574]: Network interface NamePolicy= disabled on kernel command line.
Nov 5 16:01:33.456527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 16:01:33.527098 systemd-networkd[1567]: lo: Link UP
Nov 5 16:01:33.527112 systemd-networkd[1567]: lo: Gained carrier
Nov 5 16:01:33.528974 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 16:01:33.529577 systemd-networkd[1567]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:01:33.529588 systemd-networkd[1567]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 16:01:33.530779 systemd[1]: Reached target network.target - Network.
Nov 5 16:01:33.532411 systemd-networkd[1567]: eth0: Link UP
Nov 5 16:01:33.532628 systemd-networkd[1567]: eth0: Gained carrier
Nov 5 16:01:33.532646 systemd-networkd[1567]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 16:01:33.533838 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 16:01:33.538534 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 16:01:33.546405 systemd-networkd[1567]: eth0: DHCPv4 address 172.31.17.172/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 5 16:01:33.556279 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 16:01:33.589963 kernel: ACPI: button: Power Button [PWRF]
Nov 5 16:01:33.590075 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Nov 5 16:01:33.604331 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Nov 5 16:01:33.608275 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 16:01:33.616286 kernel: ACPI: button: Sleep Button [SLPF]
Nov 5 16:01:33.703687 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 16:01:33.725945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 16:01:33.817277 kernel: loop5: detected capacity change from 0 to 224512
Nov 5 16:01:33.852285 kernel: loop6: detected capacity change from 0 to 128048
Nov 5 16:01:33.875284 kernel: loop7: detected capacity change from 0 to 110984
Nov 5 16:01:33.897334 kernel: loop1: detected capacity change from 0 to 72360
Nov 5 16:01:33.916868 (sd-merge)[1614]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'.
Nov 5 16:01:33.922035 (sd-merge)[1614]: Merged extensions into '/usr'.
Nov 5 16:01:33.926773 systemd[1]: Reload requested from client PID 1523 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 16:01:33.926792 systemd[1]: Reloading...
Nov 5 16:01:34.121325 zram_generator::config[1718]: No configuration found.
Nov 5 16:01:34.393564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 5 16:01:34.394390 systemd[1]: Reloading finished in 466 ms.
Nov 5 16:01:34.416879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 16:01:34.419495 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 16:01:34.472018 systemd[1]: Starting ensure-sysext.service...
Nov 5 16:01:34.475420 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 16:01:34.476897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 16:01:34.496323 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 16:01:34.496352 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 16:01:34.496586 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 16:01:34.496833 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 16:01:34.497657 systemd-tmpfiles[1781]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 16:01:34.497888 systemd-tmpfiles[1781]: ACLs are not supported, ignoring. Nov 5 16:01:34.497942 systemd-tmpfiles[1781]: ACLs are not supported, ignoring. Nov 5 16:01:34.498315 systemd[1]: Reload requested from client PID 1779 ('systemctl') (unit ensure-sysext.service)... Nov 5 16:01:34.498332 systemd[1]: Reloading... Nov 5 16:01:34.506717 systemd-tmpfiles[1781]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 16:01:34.506725 systemd-tmpfiles[1781]: Skipping /boot Nov 5 16:01:34.516149 systemd-tmpfiles[1781]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 16:01:34.516302 systemd-tmpfiles[1781]: Skipping /boot Nov 5 16:01:34.583285 zram_generator::config[1818]: No configuration found. Nov 5 16:01:34.830422 systemd[1]: Reloading finished in 331 ms. Nov 5 16:01:34.871022 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 16:01:34.872355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:01:34.882579 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 16:01:34.887616 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 16:01:34.890382 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 16:01:34.895438 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 16:01:34.898632 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 5 16:01:34.905247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:34.905580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:34.908741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:01:34.916395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:01:34.920029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:01:34.921073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:34.921282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:01:34.921436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:34.929467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:34.929790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:34.930043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:34.930187 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 5 16:01:34.930352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:34.938160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:34.939455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:34.948679 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 16:01:34.949667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:34.949863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:01:34.950115 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 16:01:34.952024 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:34.954842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:01:34.955950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:01:34.956744 systemd-networkd[1567]: eth0: Gained IPv6LL Nov 5 16:01:34.963554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 16:01:34.970779 systemd[1]: Finished ensure-sysext.service. Nov 5 16:01:34.974467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 16:01:34.975932 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 5 16:01:34.977743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:01:34.979242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:01:34.980712 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:01:34.980943 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:01:34.981671 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 16:01:34.981877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 16:01:34.993381 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 16:01:34.997915 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 16:01:34.999774 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 16:01:35.146595 augenrules[1905]: No rules Nov 5 16:01:35.148470 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 16:01:35.148949 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 16:01:35.183585 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 16:01:35.188765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 16:01:38.144263 ldconfig[1871]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 16:01:38.149067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 16:01:38.151858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 16:01:38.175997 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 16:01:38.176780 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 5 16:01:38.177435 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 16:01:38.177906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 16:01:38.178351 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 16:01:38.179075 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 16:01:38.179586 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 16:01:38.179968 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 16:01:38.180358 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 16:01:38.180404 systemd[1]: Reached target paths.target - Path Units. Nov 5 16:01:38.180777 systemd[1]: Reached target timers.target - Timer Units. Nov 5 16:01:38.182167 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 16:01:38.183986 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 16:01:38.186604 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 16:01:38.187216 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 16:01:38.187634 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 16:01:38.194216 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 16:01:38.195205 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 16:01:38.196422 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 16:01:38.197880 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 16:01:38.198339 systemd[1]: Reached target basic.target - Basic System. 
Nov 5 16:01:38.198778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 16:01:38.198907 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 16:01:38.200064 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 16:01:38.202435 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 16:01:38.207023 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 16:01:38.212324 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 16:01:38.215417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 16:01:38.220714 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 16:01:38.221361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 16:01:38.225562 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 16:01:38.234218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:01:38.239484 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 16:01:38.246549 systemd[1]: Started ntpd.service - Network Time Service. Nov 5 16:01:38.266532 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 16:01:38.271546 jq[1921]: false Nov 5 16:01:38.276180 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 16:01:38.280464 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 5 16:01:38.297544 extend-filesystems[1922]: Found /dev/nvme0n1p6 Nov 5 16:01:38.303580 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 16:01:38.316587 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 5 16:01:38.326605 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 16:01:38.328381 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 16:01:38.330345 extend-filesystems[1922]: Found /dev/nvme0n1p9 Nov 5 16:01:38.329093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 16:01:38.336920 google_oslogin_nss_cache[1923]: oslogin_cache_refresh[1923]: Refreshing passwd entry cache Nov 5 16:01:38.334145 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 16:01:38.345338 extend-filesystems[1922]: Checking size of /dev/nvme0n1p9 Nov 5 16:01:38.338554 oslogin_cache_refresh[1923]: Refreshing passwd entry cache Nov 5 16:01:38.360524 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 16:01:38.372698 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 16:01:38.378106 oslogin_cache_refresh[1923]: Failure getting users, quitting Nov 5 16:01:38.379567 google_oslogin_nss_cache[1923]: oslogin_cache_refresh[1923]: Failure getting users, quitting Nov 5 16:01:38.379567 google_oslogin_nss_cache[1923]: oslogin_cache_refresh[1923]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 16:01:38.379567 google_oslogin_nss_cache[1923]: oslogin_cache_refresh[1923]: Refreshing group entry cache Nov 5 16:01:38.374774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 16:01:38.378130 oslogin_cache_refresh[1923]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 16:01:38.375079 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 5 16:01:38.378194 oslogin_cache_refresh[1923]: Refreshing group entry cache Nov 5 16:01:38.389795 google_oslogin_nss_cache[1923]: oslogin_cache_refresh[1923]: Failure getting groups, quitting Nov 5 16:01:38.389795 google_oslogin_nss_cache[1923]: oslogin_cache_refresh[1923]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 16:01:38.386823 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 16:01:38.390032 jq[1947]: true Nov 5 16:01:38.381212 oslogin_cache_refresh[1923]: Failure getting groups, quitting Nov 5 16:01:38.387338 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 16:01:38.381229 oslogin_cache_refresh[1923]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 16:01:38.401932 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 16:01:38.404348 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 16:01:38.428279 coreos-metadata[1918]: Nov 05 16:01:38.422 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 16:01:38.428279 coreos-metadata[1918]: Nov 05 16:01:38.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 5 16:01:38.428279 coreos-metadata[1918]: Nov 05 16:01:38.426 INFO Fetch successful Nov 5 16:01:38.428279 coreos-metadata[1918]: Nov 05 16:01:38.426 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 5 16:01:38.429438 coreos-metadata[1918]: Nov 05 16:01:38.429 INFO Fetch successful Nov 5 16:01:38.429529 coreos-metadata[1918]: Nov 05 16:01:38.429 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 5 16:01:38.430372 extend-filesystems[1922]: Resized partition /dev/nvme0n1p9 Nov 5 16:01:38.434484 coreos-metadata[1918]: Nov 05 16:01:38.430 INFO Fetch successful Nov 5 16:01:38.434484 coreos-metadata[1918]: Nov 05 16:01:38.430 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 5 16:01:38.434484 coreos-metadata[1918]: Nov 05 16:01:38.433 INFO Fetch successful Nov 5 16:01:38.434484 coreos-metadata[1918]: Nov 05 16:01:38.433 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 5 16:01:38.437285 coreos-metadata[1918]: Nov 05 16:01:38.435 INFO Fetch failed with 404: resource not found Nov 5 16:01:38.437285 coreos-metadata[1918]: Nov 05 16:01:38.436 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 5 16:01:38.446307 coreos-metadata[1918]: Nov 05 16:01:38.439 INFO Fetch successful Nov 5 16:01:38.446307 coreos-metadata[1918]: Nov 05 16:01:38.439 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 5 16:01:38.446307 coreos-metadata[1918]: Nov 05 16:01:38.443 INFO Fetch successful Nov 5 16:01:38.446307 coreos-metadata[1918]: Nov 05 16:01:38.443 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 5 16:01:38.446307 coreos-metadata[1918]: Nov 05 16:01:38.446 INFO Fetch successful Nov 5 16:01:38.446307 coreos-metadata[1918]: Nov 05 16:01:38.446 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 5 16:01:38.447218 coreos-metadata[1918]: Nov 05 16:01:38.446 INFO Fetch successful Nov 5 16:01:38.447218 coreos-metadata[1918]: Nov 05 16:01:38.446 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 5 16:01:38.448300 coreos-metadata[1918]: Nov 05 16:01:38.447 INFO Fetch successful Nov 5 16:01:38.468247 jq[1957]: true Nov 5 16:01:38.499139 (ntainerd)[1979]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 16:01:38.509286 extend-filesystems[1988]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 16:01:38.526143 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 
blocks Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:24 UTC 2025 (1): Starting Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: ---------------------------------------------------- Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: ntp-4 is maintained by Network Time Foundation, Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: corporation. Support and training for ntp-4 are Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: available at https://www.nwtime.org/support Nov 5 16:01:38.526224 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: ---------------------------------------------------- Nov 5 16:01:38.524404 ntpd[1926]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:24 UTC 2025 (1): Starting Nov 5 16:01:38.524474 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 16:01:38.524485 ntpd[1926]: ---------------------------------------------------- Nov 5 16:01:38.524494 ntpd[1926]: ntp-4 is maintained by Network Time Foundation, Nov 5 16:01:38.524503 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 16:01:38.524512 ntpd[1926]: corporation. 
Support and training for ntp-4 are Nov 5 16:01:38.524522 ntpd[1926]: available at https://www.nwtime.org/support Nov 5 16:01:38.524531 ntpd[1926]: ---------------------------------------------------- Nov 5 16:01:38.555838 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: proto: precision = 0.062 usec (-24) Nov 5 16:01:38.544186 ntpd[1926]: proto: precision = 0.062 usec (-24) Nov 5 16:01:38.559292 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: basedate set to 2025-10-24 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: gps base set to 2025-10-26 (week 2390) Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listen normally on 3 eth0 172.31.17.172:123 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listen normally on 4 lo [::1]:123 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listen normally on 5 eth0 [fe80::450:8aff:fe0f:cf7%2]:123 Nov 5 16:01:38.576361 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: Listening on routing socket on fd #22 for interface updates Nov 5 16:01:38.557542 ntpd[1926]: basedate set to 2025-10-24 Nov 5 16:01:38.566042 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 16:01:38.557564 ntpd[1926]: gps base set to 2025-10-26 (week 2390) Nov 5 16:01:38.566363 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 5 16:01:38.557779 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 16:01:38.557809 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 16:01:38.558038 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 16:01:38.587485 extend-filesystems[1988]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 5 16:01:38.587485 extend-filesystems[1988]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 5 16:01:38.587485 extend-filesystems[1988]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Nov 5 16:01:38.580793 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 5 16:01:38.558065 ntpd[1926]: Listen normally on 3 eth0 172.31.17.172:123 Nov 5 16:01:38.629108 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:38.629108 ntpd[1926]: 5 Nov 16:01:38 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:38.629185 extend-filesystems[1922]: Resized filesystem in /dev/nvme0n1p9 Nov 5 16:01:38.591839 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Nov 5 16:01:38.558094 ntpd[1926]: Listen normally on 4 lo [::1]:123 Nov 5 16:01:38.639547 update_engine[1940]: I20251105 16:01:38.632894 1940 main.cc:92] Flatcar Update Engine starting Nov 5 16:01:38.639547 update_engine[1940]: I20251105 16:01:38.639178 1940 update_check_scheduler.cc:74] Next update check in 3m48s Nov 5 16:01:38.594304 systemd-logind[1939]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 16:01:38.558120 ntpd[1926]: Listen normally on 5 eth0 [fe80::450:8aff:fe0f:cf7%2]:123 Nov 5 16:01:38.594331 systemd-logind[1939]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 5 16:01:38.558143 ntpd[1926]: Listening on routing socket on fd #22 for interface updates Nov 5 16:01:38.653168 tar[1953]: linux-amd64/LICENSE Nov 5 16:01:38.653168 tar[1953]: linux-amd64/helm Nov 5 16:01:38.594357 systemd-logind[1939]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 16:01:38.599898 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:38.605775 systemd-logind[1939]: New seat seat0. Nov 5 16:01:38.599932 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:38.611842 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 16:01:38.620582 dbus-daemon[1919]: [system] SELinux support is enabled Nov 5 16:01:38.612146 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 16:01:38.628747 dbus-daemon[1919]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1567 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 5 16:01:38.625529 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 16:01:38.641509 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 16:01:38.644473 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 5 16:01:38.649904 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 16:01:38.649946 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 16:01:38.650967 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 16:01:38.650992 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 16:01:38.660709 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 16:01:38.661617 systemd[1]: Started update-engine.service - Update Engine. Nov 5 16:01:38.662815 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 16:01:38.666526 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 16:01:38.672543 dbus-daemon[1919]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 16:01:38.685553 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Nov 5 16:01:38.693696 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 5 16:01:38.697798 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 16:01:38.707692 systemd[1]: Starting sshkeys.service... Nov 5 16:01:38.752094 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 16:01:38.757479 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 5 16:01:39.114849 coreos-metadata[2052]: Nov 05 16:01:39.114 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 16:01:39.120548 coreos-metadata[2052]: Nov 05 16:01:39.120 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 5 16:01:39.121224 coreos-metadata[2052]: Nov 05 16:01:39.121 INFO Fetch successful Nov 5 16:01:39.121356 coreos-metadata[2052]: Nov 05 16:01:39.121 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 5 16:01:39.123429 coreos-metadata[2052]: Nov 05 16:01:39.123 INFO Fetch successful Nov 5 16:01:39.148286 unknown[2052]: wrote ssh authorized keys file for user: core Nov 5 16:01:39.219225 locksmithd[2030]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 16:01:39.227055 amazon-ssm-agent[2010]: Initializing new seelog logger Nov 5 16:01:39.232146 amazon-ssm-agent[2010]: New Seelog Logger Creation Complete Nov 5 16:01:39.232146 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.232146 amazon-ssm-agent[2010]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.232146 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 processing appconfig overrides Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 processing appconfig overrides Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 processing appconfig overrides Nov 5 16:01:39.239437 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.2379 INFO Proxy environment variables: Nov 5 16:01:39.249277 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 5 16:01:39.254697 dbus-daemon[1919]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 5 16:01:39.255543 dbus-daemon[1919]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2036 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 5 16:01:39.267574 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.267574 amazon-ssm-agent[2010]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:39.267574 amazon-ssm-agent[2010]: 2025/11/05 16:01:39 processing appconfig overrides Nov 5 16:01:39.271476 systemd[1]: Starting polkit.service - Authorization Manager... Nov 5 16:01:39.276199 update-ssh-keys[2137]: Updated "/home/core/.ssh/authorized_keys" Nov 5 16:01:39.277359 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 16:01:39.281708 systemd[1]: Finished sshkeys.service. 
Nov 5 16:01:39.339528 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.2380 INFO https_proxy:
Nov 5 16:01:39.442196 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.2380 INFO http_proxy:
Nov 5 16:01:39.510351 polkitd[2150]: Started polkitd version 126
Nov 5 16:01:39.522925 polkitd[2150]: Loading rules from directory /etc/polkit-1/rules.d
Nov 5 16:01:39.523879 polkitd[2150]: Loading rules from directory /run/polkit-1/rules.d
Nov 5 16:01:39.525695 polkitd[2150]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 5 16:01:39.528509 polkitd[2150]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Nov 5 16:01:39.528975 polkitd[2150]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 5 16:01:39.529032 polkitd[2150]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 5 16:01:39.538535 polkitd[2150]: Finished loading, compiling and executing 2 rules
Nov 5 16:01:39.538984 systemd[1]: Started polkit.service - Authorization Manager.
Nov 5 16:01:39.550588 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.2380 INFO no_proxy:
Nov 5 16:01:39.546489 dbus-daemon[1919]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 5 16:01:39.551197 polkitd[2150]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 5 16:01:39.602372 systemd-hostnamed[2036]: Hostname set to (transient)
Nov 5 16:01:39.602495 systemd-resolved[1545]: System hostname changed to 'ip-172-31-17-172'.
Nov 5 16:01:39.645463 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.2382 INFO Checking if agent identity type OnPrem can be assumed
Nov 5 16:01:39.743377 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.2384 INFO Checking if agent identity type EC2 can be assumed
Nov 5 16:01:39.803107 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 16:01:39.822678 containerd[1979]: time="2025-11-05T16:01:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 16:01:39.838283 containerd[1979]: time="2025-11-05T16:01:39.837411648Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 16:01:39.842764 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3476 INFO Agent will take identity from EC2
Nov 5 16:01:39.852887 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 16:01:39.859855 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 16:01:39.895274 containerd[1979]: time="2025-11-05T16:01:39.893194679Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.861µs"
Nov 5 16:01:39.897331 containerd[1979]: time="2025-11-05T16:01:39.897284830Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 16:01:39.897475 containerd[1979]: time="2025-11-05T16:01:39.897457023Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 16:01:39.897731 containerd[1979]: time="2025-11-05T16:01:39.897710847Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899299351Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899360302Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899459302Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899475235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899755403Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899773860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899791716Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899806078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.899897124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.900122470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 16:01:39.900835 containerd[1979]: time="2025-11-05T16:01:39.900165432Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 16:01:39.901287 containerd[1979]: time="2025-11-05T16:01:39.900183229Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 16:01:39.901287 containerd[1979]: time="2025-11-05T16:01:39.900246144Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 16:01:39.901287 containerd[1979]: time="2025-11-05T16:01:39.900596275Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 16:01:39.901287 containerd[1979]: time="2025-11-05T16:01:39.900679971Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 16:01:39.909380 containerd[1979]: time="2025-11-05T16:01:39.909341209Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 16:01:39.910862 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911089281Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911128323Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911147935Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911165522Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911183955Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911208568Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911224457Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911239278Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911268432Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911284602Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911303798Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911469328Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911496898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 16:01:39.912279 containerd[1979]: time="2025-11-05T16:01:39.911522348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 16:01:39.911182 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911537059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911560603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911575672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911591423Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911605245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911620962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911635732Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911656315Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911737384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911755335Z" level=info msg="Start snapshots syncer"
Nov 5 16:01:39.912843 containerd[1979]: time="2025-11-05T16:01:39.911784554Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 16:01:39.913301 containerd[1979]: time="2025-11-05T16:01:39.912082560Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 16:01:39.913301 containerd[1979]: time="2025-11-05T16:01:39.912156019Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 16:01:39.913488 containerd[1979]: time="2025-11-05T16:01:39.912239666Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 16:01:39.915480 containerd[1979]: time="2025-11-05T16:01:39.915441707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 16:01:39.915624 containerd[1979]: time="2025-11-05T16:01:39.915606153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 16:01:39.915720 containerd[1979]: time="2025-11-05T16:01:39.915703517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 16:01:39.915775 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 16:01:39.916530 containerd[1979]: time="2025-11-05T16:01:39.916503722Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 16:01:39.916636 containerd[1979]: time="2025-11-05T16:01:39.916619119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 16:01:39.916712 containerd[1979]: time="2025-11-05T16:01:39.916697243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 16:01:39.916780 containerd[1979]: time="2025-11-05T16:01:39.916767061Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 16:01:39.916886 containerd[1979]: time="2025-11-05T16:01:39.916870079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 16:01:39.917068 containerd[1979]: time="2025-11-05T16:01:39.917044385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 16:01:39.917120 containerd[1979]: time="2025-11-05T16:01:39.917083914Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 16:01:39.917158 containerd[1979]: time="2025-11-05T16:01:39.917139617Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 16:01:39.917195 containerd[1979]: time="2025-11-05T16:01:39.917162241Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 16:01:39.917195 containerd[1979]: time="2025-11-05T16:01:39.917177395Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 16:01:39.917285 containerd[1979]: time="2025-11-05T16:01:39.917191050Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 16:01:39.917285 containerd[1979]: time="2025-11-05T16:01:39.917203883Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 16:01:39.917285 containerd[1979]: time="2025-11-05T16:01:39.917217888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 16:01:39.917285 containerd[1979]: time="2025-11-05T16:01:39.917232595Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 16:01:39.918541 containerd[1979]: time="2025-11-05T16:01:39.918513110Z" level=info msg="runtime interface created"
Nov 5 16:01:39.918604 containerd[1979]: time="2025-11-05T16:01:39.918590331Z" level=info msg="created NRI interface"
Nov 5 16:01:39.918659 containerd[1979]: time="2025-11-05T16:01:39.918610245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 16:01:39.918701 containerd[1979]: time="2025-11-05T16:01:39.918656067Z" level=info msg="Connect containerd service"
Nov 5 16:01:39.918738 containerd[1979]: time="2025-11-05T16:01:39.918721005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 16:01:39.926934 containerd[1979]: time="2025-11-05T16:01:39.924387478Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 16:01:39.943325 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3511 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Nov 5 16:01:39.987227 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 16:01:39.991052 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 16:01:39.996180 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 16:01:39.997120 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 16:01:40.044374 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3511 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Nov 5 16:01:40.142694 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3511 INFO [amazon-ssm-agent] Starting Core Agent
Nov 5 16:01:40.146740 tar[1953]: linux-amd64/README.md
Nov 5 16:01:40.168781 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 16:01:40.224911 amazon-ssm-agent[2010]: 2025/11/05 16:01:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 5 16:01:40.224911 amazon-ssm-agent[2010]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 5 16:01:40.225051 amazon-ssm-agent[2010]: 2025/11/05 16:01:40 processing appconfig overrides
Nov 5 16:01:40.242352 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3511 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3511 INFO [Registrar] Starting registrar module
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3552 INFO [EC2Identity] Checking disk for registration info
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3553 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:39.3553 INFO [EC2Identity] Generating registration keypair
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.1770 INFO [EC2Identity] Checking write access before registering
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.1774 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2247 INFO [EC2Identity] EC2 registration was successful.
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2247 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2248 INFO [CredentialRefresher] credentialRefresher has started
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2248 INFO [CredentialRefresher] Starting credentials refresher loop
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2555 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Nov 5 16:01:40.255869 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2557 INFO [CredentialRefresher] Credentials ready
Nov 5 16:01:40.342301 amazon-ssm-agent[2010]: 2025-11-05 16:01:40.2558 INFO [CredentialRefresher] Next credential rotation will be in 29.99999480336667 minutes
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.482897504Z" level=info msg="Start subscribing containerd event"
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.482966519Z" level=info msg="Start recovering state"
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483112820Z" level=info msg="Start event monitor"
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483128694Z" level=info msg="Start cni network conf syncer for default"
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483141265Z" level=info msg="Start streaming server"
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483164705Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483175468Z" level=info msg="runtime interface starting up..."
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483184017Z" level=info msg="starting plugins..."
Nov 5 16:01:40.483276 containerd[1979]: time="2025-11-05T16:01:40.483205144Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 16:01:40.483842 containerd[1979]: time="2025-11-05T16:01:40.483813415Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 16:01:40.484613 containerd[1979]: time="2025-11-05T16:01:40.484556736Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 16:01:40.485658 containerd[1979]: time="2025-11-05T16:01:40.484740813Z" level=info msg="containerd successfully booted in 0.663543s"
Nov 5 16:01:40.484918 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 16:01:41.269616 amazon-ssm-agent[2010]: 2025-11-05 16:01:41.2694 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Nov 5 16:01:41.372233 amazon-ssm-agent[2010]: 2025-11-05 16:01:41.2717 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2202) started
Nov 5 16:01:41.472469 amazon-ssm-agent[2010]: 2025-11-05 16:01:41.2717 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Nov 5 16:01:43.429146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:01:43.431738 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 16:01:43.433319 systemd[1]: Startup finished in 3.303s (kernel) + 10.194s (initrd) + 14.243s (userspace) = 27.741s.
Nov 5 16:01:43.439686 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 16:01:43.890599 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 16:01:43.894266 systemd[1]: Started sshd@0-172.31.17.172:22-139.178.68.195:60766.service - OpenSSH per-connection server daemon (139.178.68.195:60766).
Nov 5 16:01:44.375688 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 60766 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:44.377500 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:44.384510 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 16:01:44.385697 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 16:01:44.396285 systemd-logind[1939]: New session 1 of user core.
Nov 5 16:01:44.404587 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 16:01:44.408086 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 16:01:44.423936 (systemd)[2234]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 16:01:44.428536 systemd-logind[1939]: New session c1 of user core.
Nov 5 16:01:44.587150 systemd[2234]: Queued start job for default target default.target.
Nov 5 16:01:44.599717 systemd[2234]: Created slice app.slice - User Application Slice.
Nov 5 16:01:44.599763 systemd[2234]: Reached target paths.target - Paths.
Nov 5 16:01:44.599820 systemd[2234]: Reached target timers.target - Timers.
Nov 5 16:01:44.601233 systemd[2234]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 16:01:44.614182 systemd[2234]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 16:01:44.614288 systemd[2234]: Reached target sockets.target - Sockets.
Nov 5 16:01:44.614351 systemd[2234]: Reached target basic.target - Basic System.
Nov 5 16:01:44.614404 systemd[2234]: Reached target default.target - Main User Target.
Nov 5 16:01:44.614447 systemd[2234]: Startup finished in 177ms.
Nov 5 16:01:44.614607 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 16:01:44.624537 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 16:01:44.769987 systemd[1]: Started sshd@1-172.31.17.172:22-139.178.68.195:60768.service - OpenSSH per-connection server daemon (139.178.68.195:60768).
Nov 5 16:01:44.940290 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 60768 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:44.941713 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:44.947982 systemd-logind[1939]: New session 2 of user core.
Nov 5 16:01:44.950443 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 16:01:45.078207 sshd[2248]: Connection closed by 139.178.68.195 port 60768
Nov 5 16:01:45.078593 sshd-session[2245]: pam_unix(sshd:session): session closed for user core
Nov 5 16:01:45.083477 systemd[1]: sshd@1-172.31.17.172:22-139.178.68.195:60768.service: Deactivated successfully.
Nov 5 16:01:45.086011 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 16:01:45.087923 systemd-logind[1939]: Session 2 logged out. Waiting for processes to exit.
Nov 5 16:01:45.089273 systemd-logind[1939]: Removed session 2.
Nov 5 16:01:45.112539 systemd[1]: Started sshd@2-172.31.17.172:22-139.178.68.195:60770.service - OpenSSH per-connection server daemon (139.178.68.195:60770).
Nov 5 16:01:45.287991 sshd[2254]: Accepted publickey for core from 139.178.68.195 port 60770 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:45.289505 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:45.297086 systemd-logind[1939]: New session 3 of user core.
Nov 5 16:01:45.301477 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 16:01:45.406536 kubelet[2218]: E1105 16:01:45.406401 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 16:01:45.409156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 16:01:45.409312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 16:01:45.409646 systemd[1]: kubelet.service: Consumed 1.082s CPU time, 266.5M memory peak.
Nov 5 16:01:45.424870 sshd[2257]: Connection closed by 139.178.68.195 port 60770
Nov 5 16:01:45.425442 sshd-session[2254]: pam_unix(sshd:session): session closed for user core
Nov 5 16:01:45.428933 systemd[1]: sshd@2-172.31.17.172:22-139.178.68.195:60770.service: Deactivated successfully.
Nov 5 16:01:45.430548 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 16:01:45.431728 systemd-logind[1939]: Session 3 logged out. Waiting for processes to exit.
Nov 5 16:01:45.433222 systemd-logind[1939]: Removed session 3.
Nov 5 16:01:45.462665 systemd[1]: Started sshd@3-172.31.17.172:22-139.178.68.195:60778.service - OpenSSH per-connection server daemon (139.178.68.195:60778).
Nov 5 16:01:46.807051 systemd-resolved[1545]: Clock change detected. Flushing caches.
Nov 5 16:01:46.926789 sshd[2265]: Accepted publickey for core from 139.178.68.195 port 60778 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:46.928138 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:46.933842 systemd-logind[1939]: New session 4 of user core.
Nov 5 16:01:46.940229 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 16:01:47.064208 sshd[2268]: Connection closed by 139.178.68.195 port 60778
Nov 5 16:01:47.064894 sshd-session[2265]: pam_unix(sshd:session): session closed for user core
Nov 5 16:01:47.068742 systemd[1]: sshd@3-172.31.17.172:22-139.178.68.195:60778.service: Deactivated successfully.
Nov 5 16:01:47.070459 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 16:01:47.071862 systemd-logind[1939]: Session 4 logged out. Waiting for processes to exit.
Nov 5 16:01:47.073432 systemd-logind[1939]: Removed session 4.
Nov 5 16:01:47.109929 systemd[1]: Started sshd@4-172.31.17.172:22-139.178.68.195:60780.service - OpenSSH per-connection server daemon (139.178.68.195:60780).
Nov 5 16:01:47.289860 sshd[2274]: Accepted publickey for core from 139.178.68.195 port 60780 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:47.291478 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:47.297681 systemd-logind[1939]: New session 5 of user core.
Nov 5 16:01:47.303259 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 16:01:47.443174 sudo[2278]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 16:01:47.443551 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 16:01:47.457612 sudo[2278]: pam_unix(sudo:session): session closed for user root
Nov 5 16:01:47.481200 sshd[2277]: Connection closed by 139.178.68.195 port 60780
Nov 5 16:01:47.481943 sshd-session[2274]: pam_unix(sshd:session): session closed for user core
Nov 5 16:01:47.486208 systemd[1]: sshd@4-172.31.17.172:22-139.178.68.195:60780.service: Deactivated successfully.
Nov 5 16:01:47.487815 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 16:01:47.489735 systemd-logind[1939]: Session 5 logged out. Waiting for processes to exit.
Nov 5 16:01:47.490859 systemd-logind[1939]: Removed session 5.
Nov 5 16:01:47.518323 systemd[1]: Started sshd@5-172.31.17.172:22-139.178.68.195:60792.service - OpenSSH per-connection server daemon (139.178.68.195:60792).
Nov 5 16:01:47.702078 sshd[2284]: Accepted publickey for core from 139.178.68.195 port 60792 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:47.703506 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:47.709719 systemd-logind[1939]: New session 6 of user core.
Nov 5 16:01:47.716259 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 16:01:47.815955 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 16:01:47.816346 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 16:01:47.822504 sudo[2289]: pam_unix(sudo:session): session closed for user root
Nov 5 16:01:47.829938 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 16:01:47.830332 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 16:01:47.841893 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 16:01:47.883909 augenrules[2311]: No rules
Nov 5 16:01:47.884903 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 16:01:47.885215 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 16:01:47.886230 sudo[2288]: pam_unix(sudo:session): session closed for user root
Nov 5 16:01:47.911711 sshd[2287]: Connection closed by 139.178.68.195 port 60792
Nov 5 16:01:47.912281 sshd-session[2284]: pam_unix(sshd:session): session closed for user core
Nov 5 16:01:47.916226 systemd[1]: sshd@5-172.31.17.172:22-139.178.68.195:60792.service: Deactivated successfully.
Nov 5 16:01:47.917915 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 16:01:47.918683 systemd-logind[1939]: Session 6 logged out. Waiting for processes to exit.
Nov 5 16:01:47.919955 systemd-logind[1939]: Removed session 6.
Nov 5 16:01:47.949985 systemd[1]: Started sshd@6-172.31.17.172:22-139.178.68.195:60794.service - OpenSSH per-connection server daemon (139.178.68.195:60794).
Nov 5 16:01:48.123518 sshd[2320]: Accepted publickey for core from 139.178.68.195 port 60794 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:01:48.124933 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:01:48.129881 systemd-logind[1939]: New session 7 of user core.
Nov 5 16:01:48.137267 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 16:01:48.239939 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 16:01:48.240227 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 16:01:50.240438 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 16:01:50.251510 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 16:01:51.897862 dockerd[2345]: time="2025-11-05T16:01:51.897526343Z" level=info msg="Starting up"
Nov 5 16:01:51.899428 dockerd[2345]: time="2025-11-05T16:01:51.899394406Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 16:01:51.911902 dockerd[2345]: time="2025-11-05T16:01:51.911851158Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 16:01:51.936207 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport142937882-merged.mount: Deactivated successfully.
Nov 5 16:01:51.961377 systemd[1]: var-lib-docker-metacopy\x2dcheck1523011469-merged.mount: Deactivated successfully.
Nov 5 16:01:51.983395 dockerd[2345]: time="2025-11-05T16:01:51.983097697Z" level=info msg="Loading containers: start."
Nov 5 16:01:51.995048 kernel: Initializing XFRM netlink socket
Nov 5 16:01:52.769295 (udev-worker)[2366]: Network interface NamePolicy= disabled on kernel command line.
Nov 5 16:01:52.815928 systemd-networkd[1567]: docker0: Link UP
Nov 5 16:01:52.821194 dockerd[2345]: time="2025-11-05T16:01:52.821134003Z" level=info msg="Loading containers: done."
Nov 5 16:01:52.840115 dockerd[2345]: time="2025-11-05T16:01:52.840065371Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 16:01:52.840282 dockerd[2345]: time="2025-11-05T16:01:52.840153987Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 16:01:52.840282 dockerd[2345]: time="2025-11-05T16:01:52.840249929Z" level=info msg="Initializing buildkit"
Nov 5 16:01:52.871338 dockerd[2345]: time="2025-11-05T16:01:52.871287064Z" level=info msg="Completed buildkit initialization"
Nov 5 16:01:52.881050 dockerd[2345]: time="2025-11-05T16:01:52.880966674Z" level=info msg="Daemon has completed initialization"
Nov 5 16:01:52.881223 dockerd[2345]: time="2025-11-05T16:01:52.881181085Z" level=info msg="API listen on /run/docker.sock"
Nov 5 16:01:52.881407 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 16:01:52.930902 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3882904456-merged.mount: Deactivated successfully.
Nov 5 16:01:54.813395 containerd[1979]: time="2025-11-05T16:01:54.813350041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 5 16:01:55.435580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196145296.mount: Deactivated successfully.
Nov 5 16:01:56.864159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 16:01:56.867567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:01:57.141213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:01:57.153959 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 16:01:57.232468 kubelet[2621]: E1105 16:01:57.231753 2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 16:01:57.239676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 16:01:57.239869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 16:01:57.240678 systemd[1]: kubelet.service: Consumed 224ms CPU time, 110.4M memory peak.
Nov 5 16:01:57.266621 containerd[1979]: time="2025-11-05T16:01:57.266573066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:01:57.267663 containerd[1979]: time="2025-11-05T16:01:57.267624781Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Nov 5 16:01:57.270130 containerd[1979]: time="2025-11-05T16:01:57.270046801Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:01:57.278094 containerd[1979]: time="2025-11-05T16:01:57.277999238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:01:57.279804 containerd[1979]: time="2025-11-05T16:01:57.279085527Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.465694618s"
Nov 5 16:01:57.279804 containerd[1979]: time="2025-11-05T16:01:57.279141873Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 5 16:01:57.279804 containerd[1979]: time="2025-11-05T16:01:57.279685770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 5 16:01:59.563180 containerd[1979]: time="2025-11-05T16:01:59.563129435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:01:59.591758 containerd[1979]: time="2025-11-05T16:01:59.591694989Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Nov 5 16:01:59.595608 containerd[1979]: time="2025-11-05T16:01:59.595551859Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:01:59.601642 containerd[1979]: time="2025-11-05T16:01:59.601559233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:01:59.602860 containerd[1979]: time="2025-11-05T16:01:59.602699515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.322980223s"
Nov 5 16:01:59.602860 containerd[1979]: time="2025-11-05T16:01:59.602746720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 5 16:01:59.603588 containerd[1979]: time="2025-11-05T16:01:59.603550652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 5 16:02:01.080077 containerd[1979]: time="2025-11-05T16:02:01.080007733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:01.081612 containerd[1979]: time="2025-11-05T16:02:01.081562799Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Nov 5 16:02:01.083058 containerd[1979]: time="2025-11-05T16:02:01.082734987Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:01.086237 containerd[1979]: time="2025-11-05T16:02:01.086169670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:01.087456 containerd[1979]: time="2025-11-05T16:02:01.087252957Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.483666836s"
Nov 5 16:02:01.087456 containerd[1979]: time="2025-11-05T16:02:01.087297857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 5 16:02:01.088179 containerd[1979]: time="2025-11-05T16:02:01.088153980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 5 16:02:02.761254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344217632.mount: Deactivated successfully.
Nov 5 16:02:03.343443 containerd[1979]: time="2025-11-05T16:02:03.343381938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:03.344506 containerd[1979]: time="2025-11-05T16:02:03.344361004Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 5 16:02:03.346604 containerd[1979]: time="2025-11-05T16:02:03.345717753Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:03.347675 containerd[1979]: time="2025-11-05T16:02:03.347643461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:03.348271 containerd[1979]: time="2025-11-05T16:02:03.348244822Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.259939256s"
Nov 5 16:02:03.348366 containerd[1979]: time="2025-11-05T16:02:03.348352292Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 5 16:02:03.349102 containerd[1979]: time="2025-11-05T16:02:03.349061869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 5 16:02:03.897973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount378741403.mount: Deactivated successfully.
Nov 5 16:02:04.949303 containerd[1979]: time="2025-11-05T16:02:04.949216108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:04.950433 containerd[1979]: time="2025-11-05T16:02:04.950213880Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 5 16:02:04.951258 containerd[1979]: time="2025-11-05T16:02:04.951227324Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:04.954856 containerd[1979]: time="2025-11-05T16:02:04.954812933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:04.960447 containerd[1979]: time="2025-11-05T16:02:04.960399181Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.611307575s"
Nov 5 16:02:04.961072 containerd[1979]: time="2025-11-05T16:02:04.960719448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 5 16:02:04.961555 containerd[1979]: time="2025-11-05T16:02:04.961519185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 16:02:05.402648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010169502.mount: Deactivated successfully.
Nov 5 16:02:05.409154 containerd[1979]: time="2025-11-05T16:02:05.409084390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 16:02:05.410008 containerd[1979]: time="2025-11-05T16:02:05.409962783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 16:02:05.412169 containerd[1979]: time="2025-11-05T16:02:05.412067040Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 16:02:05.414290 containerd[1979]: time="2025-11-05T16:02:05.414231001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 16:02:05.415038 containerd[1979]: time="2025-11-05T16:02:05.414806156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 453.254815ms"
Nov 5 16:02:05.415038 containerd[1979]: time="2025-11-05T16:02:05.414838306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 16:02:05.415577 containerd[1979]: time="2025-11-05T16:02:05.415547541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 5 16:02:05.962360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810602347.mount: Deactivated successfully.
Nov 5 16:02:07.364182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 16:02:07.366095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:02:08.083756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:02:08.095461 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 16:02:08.224348 kubelet[2760]: E1105 16:02:08.224285 2760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 16:02:08.228949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 16:02:08.229337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 16:02:08.230761 systemd[1]: kubelet.service: Consumed 219ms CPU time, 107.8M memory peak.
Nov 5 16:02:09.042134 containerd[1979]: time="2025-11-05T16:02:09.042068142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:09.043399 containerd[1979]: time="2025-11-05T16:02:09.043231441Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 5 16:02:09.044432 containerd[1979]: time="2025-11-05T16:02:09.044397781Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:09.047239 containerd[1979]: time="2025-11-05T16:02:09.047181599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:02:09.048353 containerd[1979]: time="2025-11-05T16:02:09.048181095Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.632601924s"
Nov 5 16:02:09.048353 containerd[1979]: time="2025-11-05T16:02:09.048216105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 5 16:02:10.918625 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 5 16:02:11.978594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:02:11.978869 systemd[1]: kubelet.service: Consumed 219ms CPU time, 107.8M memory peak.
Nov 5 16:02:11.982164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:02:12.018887 systemd[1]: Reload requested from client PID 2799 ('systemctl') (unit session-7.scope)...
Nov 5 16:02:12.018913 systemd[1]: Reloading...
Nov 5 16:02:12.172058 zram_generator::config[2847]: No configuration found.
Nov 5 16:02:12.442575 systemd[1]: Reloading finished in 423 ms.
Nov 5 16:02:12.501662 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 16:02:12.501760 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 16:02:12.502129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:02:12.505855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:02:12.997126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:02:13.008495 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 16:02:13.063551 kubelet[2904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 16:02:13.063551 kubelet[2904]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 16:02:13.063551 kubelet[2904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 16:02:13.067602 kubelet[2904]: I1105 16:02:13.067520 2904 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 16:02:13.639052 kubelet[2904]: I1105 16:02:13.638096 2904 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 5 16:02:13.639052 kubelet[2904]: I1105 16:02:13.638137 2904 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 16:02:13.639052 kubelet[2904]: I1105 16:02:13.638898 2904 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 5 16:02:13.713174 kubelet[2904]: E1105 16:02:13.713128 2904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.172:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:13.713503 kubelet[2904]: I1105 16:02:13.713475 2904 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 16:02:13.743235 kubelet[2904]: I1105 16:02:13.743205 2904 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 16:02:13.750532 kubelet[2904]: I1105 16:02:13.750493 2904 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 16:02:13.761297 kubelet[2904]: I1105 16:02:13.761211 2904 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 16:02:13.761504 kubelet[2904]: I1105 16:02:13.761283 2904 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-172","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 16:02:13.764177 kubelet[2904]: I1105 16:02:13.764112 2904 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 16:02:13.764177 kubelet[2904]: I1105 16:02:13.764165 2904 container_manager_linux.go:304] "Creating device plugin manager"
Nov 5 16:02:13.765997 kubelet[2904]: I1105 16:02:13.765955 2904 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 16:02:13.773338 kubelet[2904]: I1105 16:02:13.771844 2904 kubelet.go:446] "Attempting to sync node with API server"
Nov 5 16:02:13.773338 kubelet[2904]: I1105 16:02:13.771887 2904 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 16:02:13.774494 kubelet[2904]: I1105 16:02:13.773530 2904 kubelet.go:352] "Adding apiserver pod source"
Nov 5 16:02:13.774494 kubelet[2904]: I1105 16:02:13.773566 2904 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 16:02:13.787071 kubelet[2904]: W1105 16:02:13.785484 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:13.787071 kubelet[2904]: E1105 16:02:13.786812 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:13.787071 kubelet[2904]: W1105 16:02:13.786958 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-172&limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:13.787071 kubelet[2904]: E1105 16:02:13.787008 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-172&limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:13.790061 kubelet[2904]: I1105 16:02:13.788938 2904 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 16:02:13.794501 kubelet[2904]: I1105 16:02:13.794337 2904 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 5 16:02:13.795952 kubelet[2904]: W1105 16:02:13.795461 2904 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 16:02:13.805188 kubelet[2904]: I1105 16:02:13.805150 2904 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 16:02:13.805188 kubelet[2904]: I1105 16:02:13.805196 2904 server.go:1287] "Started kubelet"
Nov 5 16:02:13.806973 kubelet[2904]: I1105 16:02:13.806529 2904 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 16:02:13.810314 kubelet[2904]: I1105 16:02:13.810280 2904 server.go:479] "Adding debug handlers to kubelet server"
Nov 5 16:02:13.813278 kubelet[2904]: I1105 16:02:13.813188 2904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 16:02:13.813737 kubelet[2904]: I1105 16:02:13.813701 2904 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 16:02:13.817086 kubelet[2904]: I1105 16:02:13.817064 2904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 16:02:13.819763 kubelet[2904]: E1105 16:02:13.815615 2904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.172:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.172:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-172.187527bfc655e966 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-172,UID:ip-172-31-17-172,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-172,},FirstTimestamp:2025-11-05 16:02:13.805173094 +0000 UTC m=+0.791560847,LastTimestamp:2025-11-05 16:02:13.805173094 +0000 UTC m=+0.791560847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-172,}"
Nov 5 16:02:13.819991 kubelet[2904]: I1105 16:02:13.819963 2904 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 16:02:13.824865 kubelet[2904]: E1105 16:02:13.823973 2904 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-172\" not found"
Nov 5 16:02:13.824865 kubelet[2904]: I1105 16:02:13.824092 2904 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 16:02:13.826417 kubelet[2904]: I1105 16:02:13.825562 2904 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 16:02:13.826417 kubelet[2904]: I1105 16:02:13.825637 2904 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 16:02:13.826417 kubelet[2904]: W1105 16:02:13.826126 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:13.826417 kubelet[2904]: E1105 16:02:13.826190 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:13.826965 kubelet[2904]: E1105 16:02:13.826922 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": dial tcp 172.31.17.172:6443: connect: connection refused" interval="200ms"
Nov 5 16:02:13.843357 kubelet[2904]: I1105 16:02:13.842873 2904 factory.go:221] Registration of the systemd container factory successfully
Nov 5 16:02:13.843357 kubelet[2904]: I1105 16:02:13.842997 2904 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 16:02:13.854380 kubelet[2904]: I1105 16:02:13.854349 2904 factory.go:221] Registration of the containerd container factory successfully
Nov 5 16:02:13.857339 kubelet[2904]: I1105 16:02:13.857154 2904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 5 16:02:13.862178 kubelet[2904]: I1105 16:02:13.862140 2904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 5 16:02:13.862178 kubelet[2904]: I1105 16:02:13.862179 2904 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 5 16:02:13.862343 kubelet[2904]: I1105 16:02:13.862204 2904 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 16:02:13.862343 kubelet[2904]: I1105 16:02:13.862213 2904 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 5 16:02:13.862343 kubelet[2904]: E1105 16:02:13.862265 2904 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 16:02:13.875945 kubelet[2904]: E1105 16:02:13.875914 2904 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 16:02:13.876244 kubelet[2904]: W1105 16:02:13.876189 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:13.876346 kubelet[2904]: E1105 16:02:13.876254 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:13.893214 kubelet[2904]: I1105 16:02:13.893104 2904 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 16:02:13.893214 kubelet[2904]: I1105 16:02:13.893123 2904 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 16:02:13.893214 kubelet[2904]: I1105 16:02:13.893146 2904 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 16:02:13.896836 kubelet[2904]: I1105 16:02:13.896799 2904 policy_none.go:49] "None policy: Start"
Nov 5 16:02:13.896836 kubelet[2904]: I1105 16:02:13.896831 2904 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 16:02:13.896836 kubelet[2904]: I1105 16:02:13.896848 2904 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 16:02:13.903658 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 16:02:13.924355 kubelet[2904]: E1105 16:02:13.924317 2904 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-172\" not found"
Nov 5 16:02:13.927904 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 16:02:13.932281 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 16:02:13.944740 kubelet[2904]: I1105 16:02:13.944706 2904 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 5 16:02:13.944935 kubelet[2904]: I1105 16:02:13.944911 2904 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 16:02:13.945075 kubelet[2904]: I1105 16:02:13.944921 2904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 16:02:13.946931 kubelet[2904]: I1105 16:02:13.946908 2904 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 16:02:13.949509 kubelet[2904]: E1105 16:02:13.949483 2904 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 16:02:13.949625 kubelet[2904]: E1105 16:02:13.949531 2904 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-172\" not found"
Nov 5 16:02:13.985722 systemd[1]: Created slice kubepods-burstable-pod93f29b18ba94e8fe3a773da370904ece.slice - libcontainer container kubepods-burstable-pod93f29b18ba94e8fe3a773da370904ece.slice.
Nov 5 16:02:14.006364 kubelet[2904]: E1105 16:02:14.006298 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:14.014922 systemd[1]: Created slice kubepods-burstable-pod05c8c459f40e285ebb0875bab5fb5676.slice - libcontainer container kubepods-burstable-pod05c8c459f40e285ebb0875bab5fb5676.slice.
Nov 5 16:02:14.018203 kubelet[2904]: E1105 16:02:14.017934 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:14.020885 systemd[1]: Created slice kubepods-burstable-podb5431268ab7ded5a3298878b5483a92a.slice - libcontainer container kubepods-burstable-podb5431268ab7ded5a3298878b5483a92a.slice.
Nov 5 16:02:14.023347 kubelet[2904]: E1105 16:02:14.023315 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:14.027877 kubelet[2904]: E1105 16:02:14.027827 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": dial tcp 172.31.17.172:6443: connect: connection refused" interval="400ms"
Nov 5 16:02:14.047209 kubelet[2904]: I1105 16:02:14.047177 2904 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172"
Nov 5 16:02:14.047701 kubelet[2904]: E1105 16:02:14.047663 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.172:6443/api/v1/nodes\": dial tcp 172.31.17.172:6443: connect: connection refused" node="ip-172-31-17-172"
Nov 5 16:02:14.127256 kubelet[2904]: I1105 16:02:14.127202 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93f29b18ba94e8fe3a773da370904ece-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-172\" (UID: \"93f29b18ba94e8fe3a773da370904ece\") " pod="kube-system/kube-apiserver-ip-172-31-17-172"
Nov 5 16:02:14.127256 kubelet[2904]: I1105 16:02:14.127250 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93f29b18ba94e8fe3a773da370904ece-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-172\" (UID: \"93f29b18ba94e8fe3a773da370904ece\") " pod="kube-system/kube-apiserver-ip-172-31-17-172"
Nov 5 16:02:14.127768 kubelet[2904]: I1105 16:02:14.127277 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:14.127768 kubelet[2904]: I1105 16:02:14.127299 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:14.127768 kubelet[2904]: I1105 16:02:14.127322 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:14.127768 kubelet[2904]: I1105 16:02:14.127342 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:14.127768 kubelet[2904]: I1105 16:02:14.127363 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:14.127904 kubelet[2904]: I1105 16:02:14.127425 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93f29b18ba94e8fe3a773da370904ece-ca-certs\") pod \"kube-apiserver-ip-172-31-17-172\" (UID: \"93f29b18ba94e8fe3a773da370904ece\") " pod="kube-system/kube-apiserver-ip-172-31-17-172"
Nov 5 16:02:14.127904 kubelet[2904]: I1105 16:02:14.127453 2904 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5431268ab7ded5a3298878b5483a92a-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-172\" (UID: \"b5431268ab7ded5a3298878b5483a92a\") " pod="kube-system/kube-scheduler-ip-172-31-17-172"
Nov 5 16:02:14.249748 kubelet[2904]: I1105 16:02:14.249644 2904 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172"
Nov 5 16:02:14.251228 kubelet[2904]: E1105 16:02:14.251186 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.172:6443/api/v1/nodes\": dial tcp 172.31.17.172:6443: connect: connection refused" node="ip-172-31-17-172"
Nov 5 16:02:14.309269 containerd[1979]: time="2025-11-05T16:02:14.309216211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-172,Uid:93f29b18ba94e8fe3a773da370904ece,Namespace:kube-system,Attempt:0,}"
Nov 5 16:02:14.320012 containerd[1979]: time="2025-11-05T16:02:14.319937235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-172,Uid:05c8c459f40e285ebb0875bab5fb5676,Namespace:kube-system,Attempt:0,}"
Nov 5 16:02:14.325530 containerd[1979]: time="2025-11-05T16:02:14.325485195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-172,Uid:b5431268ab7ded5a3298878b5483a92a,Namespace:kube-system,Attempt:0,}"
Nov 5 16:02:14.430208 kubelet[2904]: E1105 16:02:14.430065 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": dial tcp 172.31.17.172:6443: connect: connection refused" interval="800ms"
Nov 5 16:02:14.462571 containerd[1979]: time="2025-11-05T16:02:14.462508741Z" level=info msg="connecting to shim ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8" address="unix:///run/containerd/s/12ef528e9744596dcae4f9312f15e8ce38c9ca62a9292953c87a5cf605586f0f" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:02:14.474778 containerd[1979]: time="2025-11-05T16:02:14.474673002Z" level=info msg="connecting to shim 647ccb8d26cd632f79b694c6d2bb111bb8a876a4b3c5e1290b4cdebe26a61d72" address="unix:///run/containerd/s/5b5d91ef0c0b172f59c772af7b6a464988e6123a108e5de829ee399ff5de294a" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:02:14.480061 containerd[1979]: time="2025-11-05T16:02:14.479677273Z" level=info msg="connecting to shim a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c" address="unix:///run/containerd/s/80308aa1adb71c4326be0a4c15ec2c02c9cf14c5423d27bdcaae06382b2328e5" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:02:14.599323 systemd[1]: Started cri-containerd-ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8.scope - libcontainer container ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8.
Nov 5 16:02:14.607008 systemd[1]: Started cri-containerd-647ccb8d26cd632f79b694c6d2bb111bb8a876a4b3c5e1290b4cdebe26a61d72.scope - libcontainer container 647ccb8d26cd632f79b694c6d2bb111bb8a876a4b3c5e1290b4cdebe26a61d72.
Nov 5 16:02:14.608975 systemd[1]: Started cri-containerd-a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c.scope - libcontainer container a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c.
Nov 5 16:02:14.654259 kubelet[2904]: I1105 16:02:14.654232 2904 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172"
Nov 5 16:02:14.657320 kubelet[2904]: E1105 16:02:14.657257 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.172:6443/api/v1/nodes\": dial tcp 172.31.17.172:6443: connect: connection refused" node="ip-172-31-17-172"
Nov 5 16:02:14.712698 kubelet[2904]: W1105 16:02:14.712323 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:14.712698 kubelet[2904]: E1105 16:02:14.712411 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:14.746430 containerd[1979]: time="2025-11-05T16:02:14.746390159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-172,Uid:93f29b18ba94e8fe3a773da370904ece,Namespace:kube-system,Attempt:0,} returns sandbox id \"647ccb8d26cd632f79b694c6d2bb111bb8a876a4b3c5e1290b4cdebe26a61d72\""
Nov 5 16:02:14.752790 containerd[1979]: time="2025-11-05T16:02:14.752746153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-172,Uid:b5431268ab7ded5a3298878b5483a92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c\""
Nov 5 16:02:14.755952 containerd[1979]: time="2025-11-05T16:02:14.755913686Z" level=info msg="CreateContainer within sandbox \"647ccb8d26cd632f79b694c6d2bb111bb8a876a4b3c5e1290b4cdebe26a61d72\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 16:02:14.757205 containerd[1979]: time="2025-11-05T16:02:14.757160208Z" level=info msg="CreateContainer within sandbox \"a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 16:02:14.783298 containerd[1979]: time="2025-11-05T16:02:14.783185585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-172,Uid:05c8c459f40e285ebb0875bab5fb5676,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8\""
Nov 5 16:02:14.786437 containerd[1979]: time="2025-11-05T16:02:14.786391771Z" level=info msg="CreateContainer within sandbox \"ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 16:02:14.805458 containerd[1979]: time="2025-11-05T16:02:14.805310712Z" level=info msg="Container 4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:02:14.805458 containerd[1979]: time="2025-11-05T16:02:14.805337805Z" level=info msg="Container 32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:02:14.807887 containerd[1979]: time="2025-11-05T16:02:14.807843742Z" level=info msg="Container 24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:02:14.822461 containerd[1979]: time="2025-11-05T16:02:14.822349693Z" level=info msg="CreateContainer within sandbox \"a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\""
Nov 5 16:02:14.824866 containerd[1979]: time="2025-11-05T16:02:14.824825142Z" level=info msg="StartContainer for \"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\""
Nov 5 16:02:14.829220 containerd[1979]: time="2025-11-05T16:02:14.829165170Z" level=info msg="connecting to shim 4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05" address="unix:///run/containerd/s/80308aa1adb71c4326be0a4c15ec2c02c9cf14c5423d27bdcaae06382b2328e5" protocol=ttrpc version=3
Nov 5 16:02:14.835625 containerd[1979]: time="2025-11-05T16:02:14.835580067Z" level=info msg="CreateContainer within sandbox \"ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\""
Nov 5 16:02:14.837713 containerd[1979]: time="2025-11-05T16:02:14.837675540Z" level=info msg="StartContainer for \"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\""
Nov 5 16:02:14.839720 containerd[1979]: time="2025-11-05T16:02:14.839681035Z" level=info msg="connecting to shim 32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493" address="unix:///run/containerd/s/12ef528e9744596dcae4f9312f15e8ce38c9ca62a9292953c87a5cf605586f0f" protocol=ttrpc version=3
Nov 5 16:02:14.847399 containerd[1979]: time="2025-11-05T16:02:14.847338116Z" level=info msg="CreateContainer within sandbox \"647ccb8d26cd632f79b694c6d2bb111bb8a876a4b3c5e1290b4cdebe26a61d72\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc\""
Nov 5 16:02:14.848308 containerd[1979]: time="2025-11-05T16:02:14.848248951Z" level=info msg="StartContainer for \"24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc\""
Nov 5 16:02:14.851470 containerd[1979]: time="2025-11-05T16:02:14.849999139Z" level=info msg="connecting to shim 24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc" address="unix:///run/containerd/s/5b5d91ef0c0b172f59c772af7b6a464988e6123a108e5de829ee399ff5de294a" protocol=ttrpc version=3
Nov 5 16:02:14.873309 systemd[1]: Started cri-containerd-4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05.scope - libcontainer container 4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05.
Nov 5 16:02:14.885277 systemd[1]: Started cri-containerd-32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493.scope - libcontainer container 32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493.
Nov 5 16:02:14.895403 systemd[1]: Started cri-containerd-24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc.scope - libcontainer container 24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc.
Nov 5 16:02:14.982666 kubelet[2904]: W1105 16:02:14.982599 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-172&limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:14.982903 kubelet[2904]: E1105 16:02:14.982874 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-172&limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:15.000893 containerd[1979]: time="2025-11-05T16:02:15.000851940Z" level=info msg="StartContainer for \"24dc22faa365599b80e9164ce9cab371681b0489d0f4d178549e580961228ddc\" returns successfully"
Nov 5 16:02:15.029273 containerd[1979]: time="2025-11-05T16:02:15.029234772Z" level=info msg="StartContainer for \"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\" returns successfully"
Nov 5 16:02:15.041781 containerd[1979]: time="2025-11-05T16:02:15.041748062Z" level=info msg="StartContainer for \"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\" returns successfully"
Nov 5 16:02:15.181537 kubelet[2904]: W1105 16:02:15.181420 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:15.181537 kubelet[2904]: E1105 16:02:15.181506 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:15.230697 kubelet[2904]: E1105 16:02:15.230648 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": dial tcp 172.31.17.172:6443: connect: connection refused" interval="1.6s"
Nov 5 16:02:15.378902 kubelet[2904]: W1105 16:02:15.378770 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:15.378902 kubelet[2904]: E1105 16:02:15.378869 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:15.461358 kubelet[2904]: I1105 16:02:15.461231 2904 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172"
Nov 5 16:02:15.462375 kubelet[2904]: E1105 16:02:15.462340 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.172:6443/api/v1/nodes\": dial tcp 172.31.17.172:6443: connect: connection refused" node="ip-172-31-17-172"
Nov 5 16:02:15.810542 kubelet[2904]: E1105 16:02:15.810271 2904 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.172:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:15.928347 kubelet[2904]: E1105 16:02:15.928004 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:15.930812 kubelet[2904]: E1105 16:02:15.930769 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:15.936667 kubelet[2904]: E1105 16:02:15.936352 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:16.831436 kubelet[2904]: E1105 16:02:16.831332 2904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": dial tcp 172.31.17.172:6443: connect: connection refused" interval="3.2s"
Nov 5 16:02:16.936749 kubelet[2904]: E1105 16:02:16.936695 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:16.937488 kubelet[2904]: E1105 16:02:16.937323 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:16.937488 kubelet[2904]: E1105 16:02:16.937368 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:17.065202 kubelet[2904]: I1105 16:02:17.065173 2904 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172"
Nov 5 16:02:17.065581 kubelet[2904]: E1105 16:02:17.065545 2904 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.172:6443/api/v1/nodes\": dial tcp 172.31.17.172:6443: connect: connection refused" node="ip-172-31-17-172"
Nov 5 16:02:17.586047 kubelet[2904]: W1105 16:02:17.585960 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:17.586047 kubelet[2904]: E1105 16:02:17.586016 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.172:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:17.601160 kubelet[2904]: W1105 16:02:17.601118 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-172&limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:17.601320 kubelet[2904]: E1105 16:02:17.601171 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.172:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-172&limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:17.758598 kubelet[2904]: W1105 16:02:17.758553 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:17.758598 kubelet[2904]: E1105 16:02:17.758601 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.172:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:17.939329 kubelet[2904]: E1105 16:02:17.939295 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:17.939770 kubelet[2904]: E1105 16:02:17.939757 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:18.085787 kubelet[2904]: W1105 16:02:18.085604 2904 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.172:6443: connect: connection refused
Nov 5 16:02:18.085953 kubelet[2904]: E1105 16:02:18.085797 2904 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.172:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.172:6443: connect: connection refused" logger="UnhandledError"
Nov 5 16:02:19.113648 kubelet[2904]: E1105 16:02:19.113529 2904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.172:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.172:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-172.187527bfc655e966 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-172,UID:ip-172-31-17-172,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-172,},FirstTimestamp:2025-11-05 16:02:13.805173094 +0000 UTC m=+0.791560847,LastTimestamp:2025-11-05 16:02:13.805173094 +0000 UTC m=+0.791560847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-172,}"
Nov 5 16:02:20.268244 kubelet[2904]: I1105 16:02:20.267915 2904 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172"
Nov 5 16:02:21.607310 kubelet[2904]: I1105 16:02:21.607266 2904 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-172"
Nov 5 16:02:21.607787 kubelet[2904]: E1105 16:02:21.607327 2904 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-172\": node \"ip-172-31-17-172\" not found"
Nov 5 16:02:21.634393 kubelet[2904]: E1105 16:02:21.634346 2904 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-172\" not found"
Nov 5 16:02:21.674049 kubelet[2904]: E1105 16:02:21.673835 2904 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-172\" not found" node="ip-172-31-17-172"
Nov 5 16:02:21.734754 kubelet[2904]: E1105 16:02:21.734706 2904 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-172\" not found"
Nov 5 16:02:21.835424 kubelet[2904]: E1105 16:02:21.835375 2904 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-172\" not found"
Nov 5 16:02:21.926831 kubelet[2904]: I1105 16:02:21.926774 2904 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-172"
Nov 5 16:02:21.937098 kubelet[2904]: E1105 16:02:21.935892 2904 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-172\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-172"
Nov 5 16:02:21.937098 kubelet[2904]: I1105 16:02:21.935934 2904 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-172"
Nov 5 16:02:21.947010 kubelet[2904]: E1105 16:02:21.946976 2904 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-172\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-172"
Nov 5 16:02:21.947010 kubelet[2904]: I1105 16:02:21.947012 2904 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:21.949302 kubelet[2904]: E1105 16:02:21.949269 2904 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-172\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-172"
Nov 5 16:02:22.769248 kubelet[2904]: I1105 16:02:22.768968 2904 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-172"
Nov 5 16:02:22.780858 kubelet[2904]: I1105 16:02:22.780792 2904 apiserver.go:52] "Watching apiserver"
Nov 5 16:02:22.826556 kubelet[2904]: I1105 16:02:22.826496 2904 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 16:02:24.006857 systemd[1]: Reload requested from client PID 3168 ('systemctl') (unit session-7.scope)...
Nov 5 16:02:24.006879 systemd[1]: Reloading...
Nov 5 16:02:24.141054 zram_generator::config[3209]: No configuration found.
Nov 5 16:02:24.514506 systemd[1]: Reloading finished in 507 ms.
Nov 5 16:02:24.545902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:24.557543 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 16:02:24.557948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:24.558130 systemd[1]: kubelet.service: Consumed 1.274s CPU time, 129.7M memory peak. Nov 5 16:02:24.560331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:24.823723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:24.839564 (kubelet)[3273]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:02:24.926888 kubelet[3273]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:02:24.926888 kubelet[3273]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 16:02:24.926888 kubelet[3273]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 16:02:24.926888 kubelet[3273]: I1105 16:02:24.926819 3273 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:02:24.936963 kubelet[3273]: I1105 16:02:24.936766 3273 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 16:02:24.936963 kubelet[3273]: I1105 16:02:24.936800 3273 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:02:24.937470 kubelet[3273]: I1105 16:02:24.937450 3273 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 16:02:24.940471 kubelet[3273]: I1105 16:02:24.940440 3273 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 5 16:02:24.951186 kubelet[3273]: I1105 16:02:24.951070 3273 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:02:24.964236 kubelet[3273]: I1105 16:02:24.964182 3273 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:02:24.969413 kubelet[3273]: I1105 16:02:24.968766 3273 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 16:02:24.972227 kubelet[3273]: I1105 16:02:24.971906 3273 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:02:24.972368 kubelet[3273]: I1105 16:02:24.972011 3273 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-172","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:02:24.972368 kubelet[3273]: I1105 16:02:24.972354 3273 topology_manager.go:138] "Creating topology manager with none 
policy" Nov 5 16:02:24.972548 kubelet[3273]: I1105 16:02:24.972370 3273 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 16:02:24.975400 update_engine[1940]: I20251105 16:02:24.974063 1940 update_attempter.cc:509] Updating boot flags... Nov 5 16:02:24.978522 kubelet[3273]: I1105 16:02:24.976839 3273 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:24.978522 kubelet[3273]: I1105 16:02:24.977177 3273 kubelet.go:446] "Attempting to sync node with API server" Nov 5 16:02:24.979091 kubelet[3273]: I1105 16:02:24.979051 3273 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:02:24.979208 kubelet[3273]: I1105 16:02:24.979108 3273 kubelet.go:352] "Adding apiserver pod source" Nov 5 16:02:24.979208 kubelet[3273]: I1105 16:02:24.979132 3273 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:02:24.990047 kubelet[3273]: I1105 16:02:24.989207 3273 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:02:24.995130 kubelet[3273]: I1105 16:02:24.995098 3273 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 16:02:25.025369 kubelet[3273]: I1105 16:02:25.025240 3273 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 16:02:25.025369 kubelet[3273]: I1105 16:02:25.025316 3273 server.go:1287] "Started kubelet" Nov 5 16:02:25.053133 kubelet[3273]: I1105 16:02:25.051581 3273 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:02:25.053133 kubelet[3273]: I1105 16:02:25.052372 3273 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:02:25.059524 kubelet[3273]: I1105 16:02:25.059494 3273 server.go:479] "Adding debug handlers to kubelet server" Nov 5 16:02:25.062058 kubelet[3273]: I1105 16:02:25.061557 3273 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 
16:02:25.062058 kubelet[3273]: I1105 16:02:25.061935 3273 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:02:25.064738 kubelet[3273]: I1105 16:02:25.064695 3273 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:02:25.065530 kubelet[3273]: I1105 16:02:25.065355 3273 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 16:02:25.065778 kubelet[3273]: E1105 16:02:25.065759 3273 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-172\" not found" Nov 5 16:02:25.066342 kubelet[3273]: I1105 16:02:25.066327 3273 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 16:02:25.077322 kubelet[3273]: I1105 16:02:25.076406 3273 factory.go:221] Registration of the systemd container factory successfully Nov 5 16:02:25.077322 kubelet[3273]: I1105 16:02:25.076565 3273 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:02:25.079927 kubelet[3273]: I1105 16:02:25.079153 3273 reconciler.go:26] "Reconciler: start to sync state" Nov 5 16:02:25.080322 kubelet[3273]: E1105 16:02:25.079971 3273 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 16:02:25.083227 kubelet[3273]: I1105 16:02:25.083202 3273 factory.go:221] Registration of the containerd container factory successfully Nov 5 16:02:25.091480 kubelet[3273]: I1105 16:02:25.091328 3273 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 16:02:25.093665 kubelet[3273]: I1105 16:02:25.093015 3273 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 5 16:02:25.093822 kubelet[3273]: I1105 16:02:25.093811 3273 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 16:02:25.093916 kubelet[3273]: I1105 16:02:25.093907 3273 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 16:02:25.093959 kubelet[3273]: I1105 16:02:25.093954 3273 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 16:02:25.094110 kubelet[3273]: E1105 16:02:25.094088 3273 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:02:25.176796 kubelet[3273]: I1105 16:02:25.176585 3273 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:02:25.176796 kubelet[3273]: I1105 16:02:25.176608 3273 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:02:25.176796 kubelet[3273]: I1105 16:02:25.176640 3273 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:25.179139 kubelet[3273]: I1105 16:02:25.177428 3273 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 16:02:25.179139 kubelet[3273]: I1105 16:02:25.177456 3273 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 16:02:25.179139 kubelet[3273]: I1105 16:02:25.177488 3273 policy_none.go:49] "None policy: Start" Nov 5 16:02:25.179139 kubelet[3273]: I1105 16:02:25.177502 3273 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 16:02:25.179139 kubelet[3273]: I1105 16:02:25.177517 3273 state_mem.go:35] "Initializing new in-memory state store" Nov 5 16:02:25.179139 kubelet[3273]: I1105 16:02:25.177963 3273 state_mem.go:75] "Updated machine memory state" Nov 5 16:02:25.189925 kubelet[3273]: I1105 16:02:25.189761 3273 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 16:02:25.190136 kubelet[3273]: I1105 16:02:25.189953 
3273 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:02:25.190136 kubelet[3273]: I1105 16:02:25.189966 3273 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:02:25.192517 kubelet[3273]: I1105 16:02:25.192428 3273 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:02:25.199337 kubelet[3273]: I1105 16:02:25.199272 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-172" Nov 5 16:02:25.200986 kubelet[3273]: E1105 16:02:25.200733 3273 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 16:02:25.202544 kubelet[3273]: I1105 16:02:25.202477 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-172" Nov 5 16:02:25.203057 kubelet[3273]: I1105 16:02:25.202906 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-172" Nov 5 16:02:25.220342 kubelet[3273]: E1105 16:02:25.220039 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-172\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-172" Nov 5 16:02:25.283055 kubelet[3273]: I1105 16:02:25.282981 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5431268ab7ded5a3298878b5483a92a-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-172\" (UID: \"b5431268ab7ded5a3298878b5483a92a\") " pod="kube-system/kube-scheduler-ip-172-31-17-172" Nov 5 16:02:25.283197 kubelet[3273]: I1105 16:02:25.283170 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93f29b18ba94e8fe3a773da370904ece-ca-certs\") pod 
\"kube-apiserver-ip-172-31-17-172\" (UID: \"93f29b18ba94e8fe3a773da370904ece\") " pod="kube-system/kube-apiserver-ip-172-31-17-172" Nov 5 16:02:25.283253 kubelet[3273]: I1105 16:02:25.283230 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93f29b18ba94e8fe3a773da370904ece-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-172\" (UID: \"93f29b18ba94e8fe3a773da370904ece\") " pod="kube-system/kube-apiserver-ip-172-31-17-172" Nov 5 16:02:25.283299 kubelet[3273]: I1105 16:02:25.283258 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93f29b18ba94e8fe3a773da370904ece-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-172\" (UID: \"93f29b18ba94e8fe3a773da370904ece\") " pod="kube-system/kube-apiserver-ip-172-31-17-172" Nov 5 16:02:25.283355 kubelet[3273]: I1105 16:02:25.283311 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172" Nov 5 16:02:25.284050 kubelet[3273]: I1105 16:02:25.283335 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172" Nov 5 16:02:25.284050 kubelet[3273]: I1105 16:02:25.283877 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172" Nov 5 16:02:25.284050 kubelet[3273]: I1105 16:02:25.283936 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172" Nov 5 16:02:25.284050 kubelet[3273]: I1105 16:02:25.283965 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05c8c459f40e285ebb0875bab5fb5676-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-172\" (UID: \"05c8c459f40e285ebb0875bab5fb5676\") " pod="kube-system/kube-controller-manager-ip-172-31-17-172" Nov 5 16:02:25.321446 kubelet[3273]: I1105 16:02:25.321410 3273 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-172" Nov 5 16:02:25.364631 kubelet[3273]: I1105 16:02:25.363163 3273 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-172" Nov 5 16:02:25.364631 kubelet[3273]: I1105 16:02:25.363258 3273 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-172" Nov 5 16:02:25.982424 kubelet[3273]: I1105 16:02:25.982392 3273 apiserver.go:52] "Watching apiserver" Nov 5 16:02:26.065539 kubelet[3273]: I1105 16:02:26.065484 3273 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 16:02:26.138562 kubelet[3273]: I1105 16:02:26.138526 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-172" Nov 5 16:02:26.139814 kubelet[3273]: I1105 
16:02:26.139447 3273 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-172" Nov 5 16:02:26.159211 kubelet[3273]: E1105 16:02:26.159160 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-172\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-172" Nov 5 16:02:26.159872 kubelet[3273]: E1105 16:02:26.159834 3273 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-172\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-172" Nov 5 16:02:26.178727 kubelet[3273]: I1105 16:02:26.178543 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-172" podStartSLOduration=1.178521113 podStartE2EDuration="1.178521113s" podCreationTimestamp="2025-11-05 16:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:26.175716971 +0000 UTC m=+1.304341252" watchObservedRunningTime="2025-11-05 16:02:26.178521113 +0000 UTC m=+1.307145390" Nov 5 16:02:26.213159 kubelet[3273]: I1105 16:02:26.212185 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-172" podStartSLOduration=1.21216614 podStartE2EDuration="1.21216614s" podCreationTimestamp="2025-11-05 16:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:26.211982623 +0000 UTC m=+1.340606902" watchObservedRunningTime="2025-11-05 16:02:26.21216614 +0000 UTC m=+1.340790418" Nov 5 16:02:28.892625 kubelet[3273]: I1105 16:02:28.892479 3273 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 16:02:28.893150 containerd[1979]: time="2025-11-05T16:02:28.892994579Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Nov 5 16:02:28.895629 kubelet[3273]: I1105 16:02:28.893605 3273 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 16:02:29.533708 systemd[1]: Created slice kubepods-besteffort-pod6009c18e_5d03_457b_9bb5_a54bdb8fde6e.slice - libcontainer container kubepods-besteffort-pod6009c18e_5d03_457b_9bb5_a54bdb8fde6e.slice. Nov 5 16:02:29.616401 kubelet[3273]: I1105 16:02:29.616366 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8px\" (UniqueName: \"kubernetes.io/projected/6009c18e-5d03-457b-9bb5-a54bdb8fde6e-kube-api-access-kl8px\") pod \"kube-proxy-5fvfp\" (UID: \"6009c18e-5d03-457b-9bb5-a54bdb8fde6e\") " pod="kube-system/kube-proxy-5fvfp" Nov 5 16:02:29.616582 kubelet[3273]: I1105 16:02:29.616411 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6009c18e-5d03-457b-9bb5-a54bdb8fde6e-kube-proxy\") pod \"kube-proxy-5fvfp\" (UID: \"6009c18e-5d03-457b-9bb5-a54bdb8fde6e\") " pod="kube-system/kube-proxy-5fvfp" Nov 5 16:02:29.616582 kubelet[3273]: I1105 16:02:29.616442 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6009c18e-5d03-457b-9bb5-a54bdb8fde6e-xtables-lock\") pod \"kube-proxy-5fvfp\" (UID: \"6009c18e-5d03-457b-9bb5-a54bdb8fde6e\") " pod="kube-system/kube-proxy-5fvfp" Nov 5 16:02:29.616582 kubelet[3273]: I1105 16:02:29.616476 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6009c18e-5d03-457b-9bb5-a54bdb8fde6e-lib-modules\") pod \"kube-proxy-5fvfp\" (UID: \"6009c18e-5d03-457b-9bb5-a54bdb8fde6e\") " pod="kube-system/kube-proxy-5fvfp" Nov 5 16:02:29.840698 systemd[1]: Created slice 
kubepods-besteffort-poda991831b_f923_448f_b89c_0cac151ec620.slice - libcontainer container kubepods-besteffort-poda991831b_f923_448f_b89c_0cac151ec620.slice. Nov 5 16:02:29.845464 containerd[1979]: time="2025-11-05T16:02:29.845356055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fvfp,Uid:6009c18e-5d03-457b-9bb5-a54bdb8fde6e,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:29.882710 containerd[1979]: time="2025-11-05T16:02:29.882394561Z" level=info msg="connecting to shim 60f25607625fcb5d21887a1ca3158d0e78cce516fdcfbcf844298ab4f685ec69" address="unix:///run/containerd/s/6425a1c277af0ab930eb40221333bc8a13f220dd1fb11bffcae324701d2be675" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:29.917279 systemd[1]: Started cri-containerd-60f25607625fcb5d21887a1ca3158d0e78cce516fdcfbcf844298ab4f685ec69.scope - libcontainer container 60f25607625fcb5d21887a1ca3158d0e78cce516fdcfbcf844298ab4f685ec69. Nov 5 16:02:29.918997 kubelet[3273]: I1105 16:02:29.918916 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a991831b-f923-448f-b89c-0cac151ec620-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wn8cn\" (UID: \"a991831b-f923-448f-b89c-0cac151ec620\") " pod="tigera-operator/tigera-operator-7dcd859c48-wn8cn" Nov 5 16:02:29.919497 kubelet[3273]: I1105 16:02:29.919063 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbv55\" (UniqueName: \"kubernetes.io/projected/a991831b-f923-448f-b89c-0cac151ec620-kube-api-access-fbv55\") pod \"tigera-operator-7dcd859c48-wn8cn\" (UID: \"a991831b-f923-448f-b89c-0cac151ec620\") " pod="tigera-operator/tigera-operator-7dcd859c48-wn8cn" Nov 5 16:02:29.963714 containerd[1979]: time="2025-11-05T16:02:29.963651215Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-5fvfp,Uid:6009c18e-5d03-457b-9bb5-a54bdb8fde6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"60f25607625fcb5d21887a1ca3158d0e78cce516fdcfbcf844298ab4f685ec69\"" Nov 5 16:02:29.968313 containerd[1979]: time="2025-11-05T16:02:29.968267231Z" level=info msg="CreateContainer within sandbox \"60f25607625fcb5d21887a1ca3158d0e78cce516fdcfbcf844298ab4f685ec69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 16:02:29.989852 containerd[1979]: time="2025-11-05T16:02:29.989814350Z" level=info msg="Container 4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:29.990716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093116386.mount: Deactivated successfully. Nov 5 16:02:30.012676 containerd[1979]: time="2025-11-05T16:02:30.012429833Z" level=info msg="CreateContainer within sandbox \"60f25607625fcb5d21887a1ca3158d0e78cce516fdcfbcf844298ab4f685ec69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2\"" Nov 5 16:02:30.013362 containerd[1979]: time="2025-11-05T16:02:30.013333975Z" level=info msg="StartContainer for \"4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2\"" Nov 5 16:02:30.016142 containerd[1979]: time="2025-11-05T16:02:30.016075918Z" level=info msg="connecting to shim 4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2" address="unix:///run/containerd/s/6425a1c277af0ab930eb40221333bc8a13f220dd1fb11bffcae324701d2be675" protocol=ttrpc version=3 Nov 5 16:02:30.045266 systemd[1]: Started cri-containerd-4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2.scope - libcontainer container 4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2. 
Nov 5 16:02:30.090163 containerd[1979]: time="2025-11-05T16:02:30.090121753Z" level=info msg="StartContainer for \"4b4181575b9c38369048afc24c5a99ac09d6c9c5b0acb68c77493854980f90d2\" returns successfully" Nov 5 16:02:30.145854 containerd[1979]: time="2025-11-05T16:02:30.145810636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wn8cn,Uid:a991831b-f923-448f-b89c-0cac151ec620,Namespace:tigera-operator,Attempt:0,}" Nov 5 16:02:30.162774 kubelet[3273]: I1105 16:02:30.162432 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5fvfp" podStartSLOduration=1.162415069 podStartE2EDuration="1.162415069s" podCreationTimestamp="2025-11-05 16:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:30.162309715 +0000 UTC m=+5.290933991" watchObservedRunningTime="2025-11-05 16:02:30.162415069 +0000 UTC m=+5.291039346" Nov 5 16:02:30.180611 containerd[1979]: time="2025-11-05T16:02:30.180465001Z" level=info msg="connecting to shim b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97" address="unix:///run/containerd/s/adf57407fa3aaa60ec10fc839f04ff0a95c03275f30cadb0da32b913528910be" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:30.213298 systemd[1]: Started cri-containerd-b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97.scope - libcontainer container b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97. 
Nov 5 16:02:30.272700 containerd[1979]: time="2025-11-05T16:02:30.272636689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wn8cn,Uid:a991831b-f923-448f-b89c-0cac151ec620,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97\"" Nov 5 16:02:30.274826 containerd[1979]: time="2025-11-05T16:02:30.274790660Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 16:02:31.524674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256183464.mount: Deactivated successfully. Nov 5 16:02:32.428776 containerd[1979]: time="2025-11-05T16:02:32.428728465Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:32.431471 containerd[1979]: time="2025-11-05T16:02:32.431410287Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 16:02:32.434978 containerd[1979]: time="2025-11-05T16:02:32.434928325Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:32.438880 containerd[1979]: time="2025-11-05T16:02:32.438810494Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:32.439719 containerd[1979]: time="2025-11-05T16:02:32.439682786Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.164847575s" Nov 5 16:02:32.439934 
containerd[1979]: time="2025-11-05T16:02:32.439723911Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 16:02:32.443068 containerd[1979]: time="2025-11-05T16:02:32.442967079Z" level=info msg="CreateContainer within sandbox \"b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 16:02:32.459695 containerd[1979]: time="2025-11-05T16:02:32.459647497Z" level=info msg="Container 573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:32.466218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476943226.mount: Deactivated successfully. Nov 5 16:02:32.471751 containerd[1979]: time="2025-11-05T16:02:32.471691662Z" level=info msg="CreateContainer within sandbox \"b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\"" Nov 5 16:02:32.473098 containerd[1979]: time="2025-11-05T16:02:32.472280636Z" level=info msg="StartContainer for \"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\"" Nov 5 16:02:32.503254 containerd[1979]: time="2025-11-05T16:02:32.503176005Z" level=info msg="connecting to shim 573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153" address="unix:///run/containerd/s/adf57407fa3aaa60ec10fc839f04ff0a95c03275f30cadb0da32b913528910be" protocol=ttrpc version=3 Nov 5 16:02:32.538424 systemd[1]: Started cri-containerd-573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153.scope - libcontainer container 573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153. 
Nov 5 16:02:32.580181 containerd[1979]: time="2025-11-05T16:02:32.580089247Z" level=info msg="StartContainer for \"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\" returns successfully" Nov 5 16:03:10.074909 sudo[2324]: pam_unix(sudo:session): session closed for user root Nov 5 16:03:10.100883 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:10.112885 systemd[1]: sshd@6-172.31.17.172:22-139.178.68.195:60794.service: Deactivated successfully. Nov 5 16:03:10.150779 sshd[2323]: Connection closed by 139.178.68.195 port 60794 Nov 5 16:03:10.113751 systemd-logind[1939]: Session 7 logged out. Waiting for processes to exit. Nov 5 16:03:10.122552 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 16:03:10.122820 systemd[1]: session-7.scope: Consumed 5.487s CPU time, 150.3M memory peak. Nov 5 16:03:10.131394 systemd-logind[1939]: Removed session 7. Nov 5 16:03:17.220203 kubelet[3273]: I1105 16:03:17.220122 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wn8cn" podStartSLOduration=46.052204871 podStartE2EDuration="48.219100182s" podCreationTimestamp="2025-11-05 16:02:29 +0000 UTC" firstStartedPulling="2025-11-05 16:02:30.274303453 +0000 UTC m=+5.402927717" lastFinishedPulling="2025-11-05 16:02:32.441198771 +0000 UTC m=+7.569823028" observedRunningTime="2025-11-05 16:02:33.18619375 +0000 UTC m=+8.314818027" watchObservedRunningTime="2025-11-05 16:03:17.219100182 +0000 UTC m=+52.347724460" Nov 5 16:03:17.238291 systemd[1]: Created slice kubepods-besteffort-pod26ded60d_9f5e_49b6_9767_aded6ac649fe.slice - libcontainer container kubepods-besteffort-pod26ded60d_9f5e_49b6_9767_aded6ac649fe.slice. 
Nov 5 16:03:17.246306 kubelet[3273]: I1105 16:03:17.246266 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqp7\" (UniqueName: \"kubernetes.io/projected/26ded60d-9f5e-49b6-9767-aded6ac649fe-kube-api-access-pcqp7\") pod \"calico-typha-5b79f96578-fjnbm\" (UID: \"26ded60d-9f5e-49b6-9767-aded6ac649fe\") " pod="calico-system/calico-typha-5b79f96578-fjnbm" Nov 5 16:03:17.246486 kubelet[3273]: I1105 16:03:17.246319 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ded60d-9f5e-49b6-9767-aded6ac649fe-tigera-ca-bundle\") pod \"calico-typha-5b79f96578-fjnbm\" (UID: \"26ded60d-9f5e-49b6-9767-aded6ac649fe\") " pod="calico-system/calico-typha-5b79f96578-fjnbm" Nov 5 16:03:17.246486 kubelet[3273]: I1105 16:03:17.246347 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/26ded60d-9f5e-49b6-9767-aded6ac649fe-typha-certs\") pod \"calico-typha-5b79f96578-fjnbm\" (UID: \"26ded60d-9f5e-49b6-9767-aded6ac649fe\") " pod="calico-system/calico-typha-5b79f96578-fjnbm" Nov 5 16:03:17.396739 systemd[1]: Created slice kubepods-besteffort-pod0c6f28d8_8f31_443f_a105_2eb5d1624d2b.slice - libcontainer container kubepods-besteffort-pod0c6f28d8_8f31_443f_a105_2eb5d1624d2b.slice. 
Nov 5 16:03:17.449243 kubelet[3273]: I1105 16:03:17.449199 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-cni-bin-dir\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449426 kubelet[3273]: I1105 16:03:17.449255 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-cni-net-dir\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449426 kubelet[3273]: I1105 16:03:17.449281 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-flexvol-driver-host\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449426 kubelet[3273]: I1105 16:03:17.449306 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-xtables-lock\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449426 kubelet[3273]: I1105 16:03:17.449337 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-var-lib-calico\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449426 kubelet[3273]: I1105 16:03:17.449361 3273 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-policysync\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449634 kubelet[3273]: I1105 16:03:17.449381 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-tigera-ca-bundle\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449634 kubelet[3273]: I1105 16:03:17.449405 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-lib-modules\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449634 kubelet[3273]: I1105 16:03:17.449426 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-node-certs\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449634 kubelet[3273]: I1105 16:03:17.449448 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4zhz\" (UniqueName: \"kubernetes.io/projected/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-kube-api-access-d4zhz\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.449634 kubelet[3273]: I1105 16:03:17.449474 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-cni-log-dir\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.450004 kubelet[3273]: I1105 16:03:17.449510 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0c6f28d8-8f31-443f-a105-2eb5d1624d2b-var-run-calico\") pod \"calico-node-k56kf\" (UID: \"0c6f28d8-8f31-443f-a105-2eb5d1624d2b\") " pod="calico-system/calico-node-k56kf" Nov 5 16:03:17.476799 kubelet[3273]: E1105 16:03:17.476174 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:03:17.550848 kubelet[3273]: I1105 16:03:17.550798 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e11cbb7-6c81-460e-9d02-0e852cdd8f6c-registration-dir\") pod \"csi-node-driver-dsvvp\" (UID: \"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c\") " pod="calico-system/csi-node-driver-dsvvp" Nov 5 16:03:17.550848 kubelet[3273]: I1105 16:03:17.550846 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e11cbb7-6c81-460e-9d02-0e852cdd8f6c-varrun\") pod \"csi-node-driver-dsvvp\" (UID: \"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c\") " pod="calico-system/csi-node-driver-dsvvp" Nov 5 16:03:17.551082 kubelet[3273]: I1105 16:03:17.550878 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z48dn\" (UniqueName: 
\"kubernetes.io/projected/6e11cbb7-6c81-460e-9d02-0e852cdd8f6c-kube-api-access-z48dn\") pod \"csi-node-driver-dsvvp\" (UID: \"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c\") " pod="calico-system/csi-node-driver-dsvvp" Nov 5 16:03:17.551082 kubelet[3273]: I1105 16:03:17.550926 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e11cbb7-6c81-460e-9d02-0e852cdd8f6c-socket-dir\") pod \"csi-node-driver-dsvvp\" (UID: \"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c\") " pod="calico-system/csi-node-driver-dsvvp" Nov 5 16:03:17.551082 kubelet[3273]: I1105 16:03:17.550947 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e11cbb7-6c81-460e-9d02-0e852cdd8f6c-kubelet-dir\") pod \"csi-node-driver-dsvvp\" (UID: \"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c\") " pod="calico-system/csi-node-driver-dsvvp" Nov 5 16:03:17.553933 kubelet[3273]: E1105 16:03:17.553881 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.553933 kubelet[3273]: W1105 16:03:17.553907 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.556076 kubelet[3273]: E1105 16:03:17.555730 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.556076 kubelet[3273]: E1105 16:03:17.556047 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.556076 kubelet[3273]: W1105 16:03:17.556064 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.556076 kubelet[3273]: E1105 16:03:17.556084 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.557155 kubelet[3273]: E1105 16:03:17.557136 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.558224 kubelet[3273]: W1105 16:03:17.557158 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.558224 kubelet[3273]: E1105 16:03:17.557178 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.558224 kubelet[3273]: E1105 16:03:17.557448 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.558224 kubelet[3273]: W1105 16:03:17.557462 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.558224 kubelet[3273]: E1105 16:03:17.557480 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.562562 kubelet[3273]: E1105 16:03:17.562533 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.562562 kubelet[3273]: W1105 16:03:17.562560 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.562712 kubelet[3273]: E1105 16:03:17.562583 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.575410 containerd[1979]: time="2025-11-05T16:03:17.575354441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b79f96578-fjnbm,Uid:26ded60d-9f5e-49b6-9767-aded6ac649fe,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:17.587416 kubelet[3273]: E1105 16:03:17.584699 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.587416 kubelet[3273]: W1105 16:03:17.584728 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.587416 kubelet[3273]: E1105 16:03:17.584757 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.632587 containerd[1979]: time="2025-11-05T16:03:17.632284217Z" level=info msg="connecting to shim ca527b79d5c1d0a4ea8bdfeddb82af60685b9f577bc5358ba831d8698952fb7a" address="unix:///run/containerd/s/25d5d6ea704ec54424a4d511b464321bd73d03ee847bdca83a89179991ad2852" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:17.651944 kubelet[3273]: E1105 16:03:17.651909 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.651944 kubelet[3273]: W1105 16:03:17.651938 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.652792 kubelet[3273]: E1105 16:03:17.651964 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.652792 kubelet[3273]: E1105 16:03:17.652310 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.652792 kubelet[3273]: W1105 16:03:17.652323 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.652792 kubelet[3273]: E1105 16:03:17.652339 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.652792 kubelet[3273]: E1105 16:03:17.652707 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.652792 kubelet[3273]: W1105 16:03:17.652720 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.652792 kubelet[3273]: E1105 16:03:17.652751 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.653651 kubelet[3273]: E1105 16:03:17.652983 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.653651 kubelet[3273]: W1105 16:03:17.652994 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.653651 kubelet[3273]: E1105 16:03:17.653015 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.653651 kubelet[3273]: E1105 16:03:17.653334 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.653651 kubelet[3273]: W1105 16:03:17.653345 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.653651 kubelet[3273]: E1105 16:03:17.653363 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.653651 kubelet[3273]: E1105 16:03:17.653590 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.653651 kubelet[3273]: W1105 16:03:17.653602 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.653791 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.655202 kubelet[3273]: W1105 16:03:17.653801 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.653956 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.655202 kubelet[3273]: W1105 16:03:17.653964 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.654063 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.654097 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.654113 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.654155 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.655202 kubelet[3273]: W1105 16:03:17.654164 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.655202 kubelet[3273]: E1105 16:03:17.654190 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.655616 kubelet[3273]: E1105 16:03:17.655554 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.655616 kubelet[3273]: W1105 16:03:17.655568 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.655616 kubelet[3273]: E1105 16:03:17.655610 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.658076 kubelet[3273]: E1105 16:03:17.658005 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.658076 kubelet[3273]: W1105 16:03:17.658061 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.658394 kubelet[3273]: E1105 16:03:17.658143 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.658663 kubelet[3273]: E1105 16:03:17.658469 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.658663 kubelet[3273]: W1105 16:03:17.658481 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.658663 kubelet[3273]: E1105 16:03:17.658610 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.658861 kubelet[3273]: E1105 16:03:17.658812 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.658861 kubelet[3273]: W1105 16:03:17.658823 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.659157 kubelet[3273]: E1105 16:03:17.658930 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.659245 kubelet[3273]: E1105 16:03:17.659161 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.659245 kubelet[3273]: W1105 16:03:17.659171 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.659712 kubelet[3273]: E1105 16:03:17.659284 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.659712 kubelet[3273]: E1105 16:03:17.659383 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.659712 kubelet[3273]: W1105 16:03:17.659392 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.659712 kubelet[3273]: E1105 16:03:17.659554 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.659712 kubelet[3273]: W1105 16:03:17.659562 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.659919 kubelet[3273]: E1105 16:03:17.659776 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.659919 kubelet[3273]: W1105 16:03:17.659786 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.659919 kubelet[3273]: E1105 16:03:17.659800 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.660082 kubelet[3273]: E1105 16:03:17.660047 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.660082 kubelet[3273]: W1105 16:03:17.660057 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.660082 kubelet[3273]: E1105 16:03:17.660069 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.660828 kubelet[3273]: E1105 16:03:17.660281 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.660828 kubelet[3273]: W1105 16:03:17.660292 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.660828 kubelet[3273]: E1105 16:03:17.660304 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.660828 kubelet[3273]: E1105 16:03:17.660483 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.660828 kubelet[3273]: W1105 16:03:17.660491 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.660828 kubelet[3273]: E1105 16:03:17.660502 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.660828 kubelet[3273]: E1105 16:03:17.660733 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.661278 kubelet[3273]: E1105 16:03:17.660953 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.661278 kubelet[3273]: W1105 16:03:17.660965 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.661278 kubelet[3273]: E1105 16:03:17.660979 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.661289 systemd[1]: Started cri-containerd-ca527b79d5c1d0a4ea8bdfeddb82af60685b9f577bc5358ba831d8698952fb7a.scope - libcontainer container ca527b79d5c1d0a4ea8bdfeddb82af60685b9f577bc5358ba831d8698952fb7a. 
Nov 5 16:03:17.662269 kubelet[3273]: E1105 16:03:17.662210 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.662613 kubelet[3273]: E1105 16:03:17.662594 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.662881 kubelet[3273]: W1105 16:03:17.662701 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.662881 kubelet[3273]: E1105 16:03:17.662730 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.663089 kubelet[3273]: E1105 16:03:17.663077 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.663175 kubelet[3273]: W1105 16:03:17.663163 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.663413 kubelet[3273]: E1105 16:03:17.663397 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.665203 kubelet[3273]: E1105 16:03:17.665182 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.665203 kubelet[3273]: W1105 16:03:17.665202 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.665359 kubelet[3273]: E1105 16:03:17.665223 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.666311 kubelet[3273]: E1105 16:03:17.666286 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.666311 kubelet[3273]: W1105 16:03:17.666305 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.666436 kubelet[3273]: E1105 16:03:17.666322 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:17.678063 kubelet[3273]: E1105 16:03:17.677469 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:17.678063 kubelet[3273]: W1105 16:03:17.677500 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:17.678063 kubelet[3273]: E1105 16:03:17.677527 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:17.703325 containerd[1979]: time="2025-11-05T16:03:17.703259947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k56kf,Uid:0c6f28d8-8f31-443f-a105-2eb5d1624d2b,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:17.733915 containerd[1979]: time="2025-11-05T16:03:17.732881653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b79f96578-fjnbm,Uid:26ded60d-9f5e-49b6-9767-aded6ac649fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca527b79d5c1d0a4ea8bdfeddb82af60685b9f577bc5358ba831d8698952fb7a\"" Nov 5 16:03:17.744501 containerd[1979]: time="2025-11-05T16:03:17.744083112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 16:03:17.765322 containerd[1979]: time="2025-11-05T16:03:17.763392766Z" level=info msg="connecting to shim 5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274" address="unix:///run/containerd/s/fd23c18b48c156e06fc85899465c07bd015ad1e0b0849c2ca820633d899e5f3b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:17.820272 systemd[1]: Started cri-containerd-5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274.scope - libcontainer container 5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274. 
Nov 5 16:03:17.865729 containerd[1979]: time="2025-11-05T16:03:17.865685612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k56kf,Uid:0c6f28d8-8f31-443f-a105-2eb5d1624d2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\"" Nov 5 16:03:19.095667 kubelet[3273]: E1105 16:03:19.094879 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:03:19.355953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913050232.mount: Deactivated successfully. Nov 5 16:03:20.753584 containerd[1979]: time="2025-11-05T16:03:20.753515320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:20.754938 containerd[1979]: time="2025-11-05T16:03:20.754787687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 16:03:20.757082 containerd[1979]: time="2025-11-05T16:03:20.757017814Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:20.759670 containerd[1979]: time="2025-11-05T16:03:20.759640049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:20.760810 containerd[1979]: time="2025-11-05T16:03:20.760148678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.016017506s" Nov 5 16:03:20.760810 containerd[1979]: time="2025-11-05T16:03:20.760183674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 16:03:20.762563 containerd[1979]: time="2025-11-05T16:03:20.762231478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 16:03:20.789443 containerd[1979]: time="2025-11-05T16:03:20.789391554Z" level=info msg="CreateContainer within sandbox \"ca527b79d5c1d0a4ea8bdfeddb82af60685b9f577bc5358ba831d8698952fb7a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 16:03:20.802118 containerd[1979]: time="2025-11-05T16:03:20.799014052Z" level=info msg="Container 7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:20.826787 containerd[1979]: time="2025-11-05T16:03:20.826740945Z" level=info msg="CreateContainer within sandbox \"ca527b79d5c1d0a4ea8bdfeddb82af60685b9f577bc5358ba831d8698952fb7a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb\"" Nov 5 16:03:20.827919 containerd[1979]: time="2025-11-05T16:03:20.827844283Z" level=info msg="StartContainer for \"7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb\"" Nov 5 16:03:20.830105 containerd[1979]: time="2025-11-05T16:03:20.830061947Z" level=info msg="connecting to shim 7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb" address="unix:///run/containerd/s/25d5d6ea704ec54424a4d511b464321bd73d03ee847bdca83a89179991ad2852" protocol=ttrpc version=3 Nov 5 
16:03:20.899350 systemd[1]: Started cri-containerd-7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb.scope - libcontainer container 7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb. Nov 5 16:03:20.983046 containerd[1979]: time="2025-11-05T16:03:20.981646794Z" level=info msg="StartContainer for \"7705ff6b8e84b35dcb901f06e46e408e18d8a17fc0d39c946c1b51528d60a5bb\" returns successfully" Nov 5 16:03:21.099242 kubelet[3273]: E1105 16:03:21.099082 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:03:21.462184 kubelet[3273]: E1105 16:03:21.462148 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.462184 kubelet[3273]: W1105 16:03:21.462180 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.462421 kubelet[3273]: E1105 16:03:21.462216 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.462732 kubelet[3273]: E1105 16:03:21.462474 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.462732 kubelet[3273]: W1105 16:03:21.462504 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.462732 kubelet[3273]: E1105 16:03:21.462525 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.462911 kubelet[3273]: E1105 16:03:21.462774 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.462911 kubelet[3273]: W1105 16:03:21.462787 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.462911 kubelet[3273]: E1105 16:03:21.462816 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.463373 kubelet[3273]: E1105 16:03:21.463135 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.463373 kubelet[3273]: W1105 16:03:21.463149 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.463373 kubelet[3273]: E1105 16:03:21.463161 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.464016 kubelet[3273]: E1105 16:03:21.464003 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.464255 kubelet[3273]: W1105 16:03:21.464127 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.464255 kubelet[3273]: E1105 16:03:21.464149 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.464616 kubelet[3273]: E1105 16:03:21.464449 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.464616 kubelet[3273]: W1105 16:03:21.464462 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.464616 kubelet[3273]: E1105 16:03:21.464477 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.465241 kubelet[3273]: E1105 16:03:21.465225 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.465341 kubelet[3273]: W1105 16:03:21.465329 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.465418 kubelet[3273]: E1105 16:03:21.465407 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.465800 kubelet[3273]: E1105 16:03:21.465674 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.466040 kubelet[3273]: W1105 16:03:21.465889 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.466040 kubelet[3273]: E1105 16:03:21.465913 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.466782 kubelet[3273]: E1105 16:03:21.466679 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.466782 kubelet[3273]: W1105 16:03:21.466693 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.466782 kubelet[3273]: E1105 16:03:21.466707 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.467784 kubelet[3273]: E1105 16:03:21.467623 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.467784 kubelet[3273]: W1105 16:03:21.467638 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.468902 kubelet[3273]: E1105 16:03:21.467652 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.469091 kubelet[3273]: E1105 16:03:21.469077 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.469450 kubelet[3273]: W1105 16:03:21.469296 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.469450 kubelet[3273]: E1105 16:03:21.469318 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.472045 kubelet[3273]: E1105 16:03:21.470712 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.472045 kubelet[3273]: W1105 16:03:21.470726 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.472045 kubelet[3273]: E1105 16:03:21.470738 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.472597 kubelet[3273]: E1105 16:03:21.472582 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.472694 kubelet[3273]: W1105 16:03:21.472681 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.472772 kubelet[3273]: E1105 16:03:21.472760 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.474196 kubelet[3273]: E1105 16:03:21.474166 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.474384 kubelet[3273]: W1105 16:03:21.474312 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.474384 kubelet[3273]: E1105 16:03:21.474333 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.474706 kubelet[3273]: E1105 16:03:21.474662 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.474706 kubelet[3273]: W1105 16:03:21.474675 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.474706 kubelet[3273]: E1105 16:03:21.474688 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.492351 kubelet[3273]: E1105 16:03:21.491940 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.492351 kubelet[3273]: W1105 16:03:21.491966 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.492351 kubelet[3273]: E1105 16:03:21.491991 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.493199 kubelet[3273]: E1105 16:03:21.493167 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.493537 kubelet[3273]: W1105 16:03:21.493334 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.493537 kubelet[3273]: E1105 16:03:21.493370 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.495057 kubelet[3273]: E1105 16:03:21.494165 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.495256 kubelet[3273]: W1105 16:03:21.495165 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.495256 kubelet[3273]: E1105 16:03:21.495208 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.495739 kubelet[3273]: E1105 16:03:21.495707 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.495739 kubelet[3273]: W1105 16:03:21.495721 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.496034 kubelet[3273]: E1105 16:03:21.495925 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.496202 kubelet[3273]: E1105 16:03:21.496172 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.496202 kubelet[3273]: W1105 16:03:21.496186 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.496484 kubelet[3273]: E1105 16:03:21.496387 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.496846 kubelet[3273]: E1105 16:03:21.496832 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.496987 kubelet[3273]: W1105 16:03:21.496921 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.497208 kubelet[3273]: E1105 16:03:21.497111 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.497421 kubelet[3273]: E1105 16:03:21.497395 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.497421 kubelet[3273]: W1105 16:03:21.497407 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.497629 kubelet[3273]: E1105 16:03:21.497616 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.498095 kubelet[3273]: E1105 16:03:21.498062 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.498095 kubelet[3273]: W1105 16:03:21.498077 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.498377 kubelet[3273]: E1105 16:03:21.498324 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.498506 kubelet[3273]: E1105 16:03:21.498495 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.498578 kubelet[3273]: W1105 16:03:21.498568 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.500094 kubelet[3273]: E1105 16:03:21.500047 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.500418 kubelet[3273]: E1105 16:03:21.500385 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.500418 kubelet[3273]: W1105 16:03:21.500400 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.500844 kubelet[3273]: E1105 16:03:21.500549 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.501014 kubelet[3273]: E1105 16:03:21.501002 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.501136 kubelet[3273]: W1105 16:03:21.501122 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.501290 kubelet[3273]: E1105 16:03:21.501275 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.501579 kubelet[3273]: E1105 16:03:21.501549 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.501579 kubelet[3273]: W1105 16:03:21.501562 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.501788 kubelet[3273]: E1105 16:03:21.501762 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.503438 kubelet[3273]: E1105 16:03:21.502068 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.503438 kubelet[3273]: W1105 16:03:21.502115 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.503700 kubelet[3273]: E1105 16:03:21.503612 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.504240 kubelet[3273]: E1105 16:03:21.503959 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.504240 kubelet[3273]: W1105 16:03:21.503972 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.504240 kubelet[3273]: E1105 16:03:21.503995 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.504559 kubelet[3273]: E1105 16:03:21.504528 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.504559 kubelet[3273]: W1105 16:03:21.504543 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.504747 kubelet[3273]: E1105 16:03:21.504700 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.505382 kubelet[3273]: E1105 16:03:21.505051 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.505382 kubelet[3273]: W1105 16:03:21.505065 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.505382 kubelet[3273]: E1105 16:03:21.505080 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:21.506107 kubelet[3273]: E1105 16:03:21.506092 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.506191 kubelet[3273]: W1105 16:03:21.506181 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.506299 kubelet[3273]: E1105 16:03:21.506285 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:21.506603 kubelet[3273]: E1105 16:03:21.506591 3273 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:21.506686 kubelet[3273]: W1105 16:03:21.506676 3273 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:21.506794 kubelet[3273]: E1105 16:03:21.506783 3273 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:22.049948 containerd[1979]: time="2025-11-05T16:03:22.049896580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:22.050938 containerd[1979]: time="2025-11-05T16:03:22.050798181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 16:03:22.051880 containerd[1979]: time="2025-11-05T16:03:22.051845639Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:22.055604 containerd[1979]: time="2025-11-05T16:03:22.055267916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:22.055778 containerd[1979]: time="2025-11-05T16:03:22.055755896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.293466467s" Nov 5 16:03:22.055848 containerd[1979]: time="2025-11-05T16:03:22.055835724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 16:03:22.059511 containerd[1979]: time="2025-11-05T16:03:22.059475807Z" level=info msg="CreateContainer within sandbox \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 16:03:22.073239 containerd[1979]: time="2025-11-05T16:03:22.073202039Z" level=info msg="Container 06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:22.089254 containerd[1979]: time="2025-11-05T16:03:22.089205783Z" level=info msg="CreateContainer within sandbox \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\"" Nov 5 16:03:22.091059 containerd[1979]: time="2025-11-05T16:03:22.090190678Z" level=info msg="StartContainer for \"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\"" Nov 5 16:03:22.092228 containerd[1979]: time="2025-11-05T16:03:22.092189821Z" level=info msg="connecting to shim 06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8" address="unix:///run/containerd/s/fd23c18b48c156e06fc85899465c07bd015ad1e0b0849c2ca820633d899e5f3b" protocol=ttrpc version=3 Nov 5 16:03:22.137275 systemd[1]: Started cri-containerd-06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8.scope - libcontainer container 06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8. Nov 5 16:03:22.184236 containerd[1979]: time="2025-11-05T16:03:22.184176240Z" level=info msg="StartContainer for \"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\" returns successfully" Nov 5 16:03:22.196693 systemd[1]: cri-containerd-06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8.scope: Deactivated successfully. 
Nov 5 16:03:22.232734 containerd[1979]: time="2025-11-05T16:03:22.232372244Z" level=info msg="received exit event container_id:\"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\" id:\"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\" pid:4096 exited_at:{seconds:1762358602 nanos:203382012}" Nov 5 16:03:22.247383 containerd[1979]: time="2025-11-05T16:03:22.247309257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\" id:\"06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8\" pid:4096 exited_at:{seconds:1762358602 nanos:203382012}" Nov 5 16:03:22.270544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e49b194e7e52a1ad8f366564727b139c56eff6f7bb3c632eeab2b31e013cb8-rootfs.mount: Deactivated successfully. Nov 5 16:03:22.388184 containerd[1979]: time="2025-11-05T16:03:22.388120893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 16:03:22.424224 kubelet[3273]: I1105 16:03:22.424165 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b79f96578-fjnbm" podStartSLOduration=2.40545971 podStartE2EDuration="5.424147879s" podCreationTimestamp="2025-11-05 16:03:17 +0000 UTC" firstStartedPulling="2025-11-05 16:03:17.743148853 +0000 UTC m=+52.871773121" lastFinishedPulling="2025-11-05 16:03:20.761837032 +0000 UTC m=+55.890461290" observedRunningTime="2025-11-05 16:03:21.445668497 +0000 UTC m=+56.574292774" watchObservedRunningTime="2025-11-05 16:03:22.424147879 +0000 UTC m=+57.552772156" Nov 5 16:03:23.096152 kubelet[3273]: E1105 16:03:23.095148 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" 
podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:03:25.096693 kubelet[3273]: E1105 16:03:25.096387 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:03:27.095933 kubelet[3273]: E1105 16:03:27.095878 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:03:27.117273 systemd[1]: Started sshd@7-172.31.17.172:22-34.201.44.144:13060.service - OpenSSH per-connection server daemon (34.201.44.144:13060).
Nov 5 16:03:28.488148 sshd[4147]: Connection closed by 34.201.44.144 port 13060 [preauth]
Nov 5 16:03:28.490769 systemd[1]: sshd@7-172.31.17.172:22-34.201.44.144:13060.service: Deactivated successfully.
Nov 5 16:03:28.620378 containerd[1979]: time="2025-11-05T16:03:28.620320031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:03:28.621624 containerd[1979]: time="2025-11-05T16:03:28.621390917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 5 16:03:28.622852 containerd[1979]: time="2025-11-05T16:03:28.622814745Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:03:28.625138 containerd[1979]: time="2025-11-05T16:03:28.625109739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 16:03:28.625645 containerd[1979]: time="2025-11-05T16:03:28.625622432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.237439822s"
Nov 5 16:03:28.625832 containerd[1979]: time="2025-11-05T16:03:28.625815659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 5 16:03:28.639535 containerd[1979]: time="2025-11-05T16:03:28.639472136Z" level=info msg="CreateContainer within sandbox \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 5 16:03:28.653651 containerd[1979]: time="2025-11-05T16:03:28.651604335Z" level=info msg="Container 16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:03:28.666120 containerd[1979]: time="2025-11-05T16:03:28.666014299Z" level=info msg="CreateContainer within sandbox \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\""
Nov 5 16:03:28.668037 containerd[1979]: time="2025-11-05T16:03:28.667974535Z" level=info msg="StartContainer for \"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\""
Nov 5 16:03:28.669769 containerd[1979]: time="2025-11-05T16:03:28.669673553Z" level=info msg="connecting to shim 16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9" address="unix:///run/containerd/s/fd23c18b48c156e06fc85899465c07bd015ad1e0b0849c2ca820633d899e5f3b" protocol=ttrpc version=3
Nov 5 16:03:28.696250 systemd[1]: Started cri-containerd-16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9.scope - libcontainer container 16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9.
Nov 5 16:03:28.769065 containerd[1979]: time="2025-11-05T16:03:28.768928175Z" level=info msg="StartContainer for \"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\" returns successfully"
Nov 5 16:03:29.096218 kubelet[3273]: E1105 16:03:29.095139 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:03:30.039436 systemd[1]: cri-containerd-16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9.scope: Deactivated successfully.
Nov 5 16:03:30.039813 systemd[1]: cri-containerd-16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9.scope: Consumed 567ms CPU time, 163.6M memory peak, 5.1M read from disk, 171.3M written to disk.
Nov 5 16:03:30.065748 containerd[1979]: time="2025-11-05T16:03:30.061367943Z" level=info msg="received exit event container_id:\"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\" id:\"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\" pid:4170 exited_at:{seconds:1762358610 nanos:60370961}"
Nov 5 16:03:30.065748 containerd[1979]: time="2025-11-05T16:03:30.061683861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\" id:\"16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9\" pid:4170 exited_at:{seconds:1762358610 nanos:60370961}"
Nov 5 16:03:30.105995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16536bbe8310e9336cdcad9386dd11af16159d6c3cb6ff0a764aa1d407eacea9-rootfs.mount: Deactivated successfully.
Nov 5 16:03:30.144061 kubelet[3273]: I1105 16:03:30.143128 3273 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 5 16:03:30.230408 systemd[1]: Created slice kubepods-burstable-pod68251c55_f958_4ce6_8d9b_1ec5531fcb53.slice - libcontainer container kubepods-burstable-pod68251c55_f958_4ce6_8d9b_1ec5531fcb53.slice.
Nov 5 16:03:30.256577 systemd[1]: Created slice kubepods-besteffort-pod2651b52f_bebf_4e7b_a8cc_451e0eb22851.slice - libcontainer container kubepods-besteffort-pod2651b52f_bebf_4e7b_a8cc_451e0eb22851.slice.
Nov 5 16:03:30.296916 kubelet[3273]: I1105 16:03:30.296268 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk4bn\" (UniqueName: \"kubernetes.io/projected/9aac16aa-0990-4e14-a1db-e5abd9a92505-kube-api-access-fk4bn\") pod \"calico-kube-controllers-74b589d999-5tfgh\" (UID: \"9aac16aa-0990-4e14-a1db-e5abd9a92505\") " pod="calico-system/calico-kube-controllers-74b589d999-5tfgh"
Nov 5 16:03:30.296916 kubelet[3273]: I1105 16:03:30.296320 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8zpw\" (UniqueName: \"kubernetes.io/projected/68251c55-f958-4ce6-8d9b-1ec5531fcb53-kube-api-access-l8zpw\") pod \"coredns-668d6bf9bc-snx9z\" (UID: \"68251c55-f958-4ce6-8d9b-1ec5531fcb53\") " pod="kube-system/coredns-668d6bf9bc-snx9z"
Nov 5 16:03:30.296916 kubelet[3273]: I1105 16:03:30.296348 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-backend-key-pair\") pod \"whisker-778445cdc8-pqppt\" (UID: \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\") " pod="calico-system/whisker-778445cdc8-pqppt"
Nov 5 16:03:30.296916 kubelet[3273]: I1105 16:03:30.296372 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8831874b-2bb6-46c1-a079-c45a246f51e1-goldmane-key-pair\") pod \"goldmane-666569f655-xbcp7\" (UID: \"8831874b-2bb6-46c1-a079-c45a246f51e1\") " pod="calico-system/goldmane-666569f655-xbcp7"
Nov 5 16:03:30.296916 kubelet[3273]: I1105 16:03:30.296397 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7c9q\" (UniqueName: \"kubernetes.io/projected/8831874b-2bb6-46c1-a079-c45a246f51e1-kube-api-access-f7c9q\") pod \"goldmane-666569f655-xbcp7\" (UID: \"8831874b-2bb6-46c1-a079-c45a246f51e1\") " pod="calico-system/goldmane-666569f655-xbcp7"
Nov 5 16:03:30.297789 kubelet[3273]: I1105 16:03:30.296568 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aac16aa-0990-4e14-a1db-e5abd9a92505-tigera-ca-bundle\") pod \"calico-kube-controllers-74b589d999-5tfgh\" (UID: \"9aac16aa-0990-4e14-a1db-e5abd9a92505\") " pod="calico-system/calico-kube-controllers-74b589d999-5tfgh"
Nov 5 16:03:30.297789 kubelet[3273]: I1105 16:03:30.296599 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8h6z\" (UniqueName: \"kubernetes.io/projected/2651b52f-bebf-4e7b-a8cc-451e0eb22851-kube-api-access-r8h6z\") pod \"whisker-778445cdc8-pqppt\" (UID: \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\") " pod="calico-system/whisker-778445cdc8-pqppt"
Nov 5 16:03:30.297789 kubelet[3273]: I1105 16:03:30.296627 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d8tr\" (UniqueName: \"kubernetes.io/projected/97bb7728-1652-4f73-a3fd-5b00174bed72-kube-api-access-8d8tr\") pod \"calico-apiserver-6df446974d-5p6n9\" (UID: \"97bb7728-1652-4f73-a3fd-5b00174bed72\") " pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9"
Nov 5 16:03:30.297789 kubelet[3273]: I1105 16:03:30.296651 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae020f58-18ae-4ec2-9ce4-9d559dab8fbd-config-volume\") pod \"coredns-668d6bf9bc-xb5df\" (UID: \"ae020f58-18ae-4ec2-9ce4-9d559dab8fbd\") " pod="kube-system/coredns-668d6bf9bc-xb5df"
Nov 5 16:03:30.297789 kubelet[3273]: I1105 16:03:30.296679 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a4ffcd2-c3d0-43ff-8d92-50435ddcecef-calico-apiserver-certs\") pod \"calico-apiserver-6df446974d-wz89l\" (UID: \"7a4ffcd2-c3d0-43ff-8d92-50435ddcecef\") " pod="calico-apiserver/calico-apiserver-6df446974d-wz89l"
Nov 5 16:03:30.300518 kubelet[3273]: I1105 16:03:30.296703 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-ca-bundle\") pod \"whisker-778445cdc8-pqppt\" (UID: \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\") " pod="calico-system/whisker-778445cdc8-pqppt"
Nov 5 16:03:30.300518 kubelet[3273]: I1105 16:03:30.296728 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65zph\" (UniqueName: \"kubernetes.io/projected/ae020f58-18ae-4ec2-9ce4-9d559dab8fbd-kube-api-access-65zph\") pod \"coredns-668d6bf9bc-xb5df\" (UID: \"ae020f58-18ae-4ec2-9ce4-9d559dab8fbd\") " pod="kube-system/coredns-668d6bf9bc-xb5df"
Nov 5 16:03:30.300518 kubelet[3273]: I1105 16:03:30.296761 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pg2j\" (UniqueName: \"kubernetes.io/projected/7a4ffcd2-c3d0-43ff-8d92-50435ddcecef-kube-api-access-9pg2j\") pod \"calico-apiserver-6df446974d-wz89l\" (UID: \"7a4ffcd2-c3d0-43ff-8d92-50435ddcecef\") " pod="calico-apiserver/calico-apiserver-6df446974d-wz89l"
Nov 5 16:03:30.300518 kubelet[3273]: I1105 16:03:30.296786 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8831874b-2bb6-46c1-a079-c45a246f51e1-config\") pod \"goldmane-666569f655-xbcp7\" (UID: \"8831874b-2bb6-46c1-a079-c45a246f51e1\") " pod="calico-system/goldmane-666569f655-xbcp7"
Nov 5 16:03:30.300518 kubelet[3273]: I1105 16:03:30.296812 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8831874b-2bb6-46c1-a079-c45a246f51e1-goldmane-ca-bundle\") pod \"goldmane-666569f655-xbcp7\" (UID: \"8831874b-2bb6-46c1-a079-c45a246f51e1\") " pod="calico-system/goldmane-666569f655-xbcp7"
Nov 5 16:03:30.300742 kubelet[3273]: I1105 16:03:30.296970 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68251c55-f958-4ce6-8d9b-1ec5531fcb53-config-volume\") pod \"coredns-668d6bf9bc-snx9z\" (UID: \"68251c55-f958-4ce6-8d9b-1ec5531fcb53\") " pod="kube-system/coredns-668d6bf9bc-snx9z"
Nov 5 16:03:30.300742 kubelet[3273]: I1105 16:03:30.296997 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97bb7728-1652-4f73-a3fd-5b00174bed72-calico-apiserver-certs\") pod \"calico-apiserver-6df446974d-5p6n9\" (UID: \"97bb7728-1652-4f73-a3fd-5b00174bed72\") " pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9"
Nov 5 16:03:30.302575 systemd[1]: Created slice kubepods-besteffort-pod9aac16aa_0990_4e14_a1db_e5abd9a92505.slice - libcontainer container kubepods-besteffort-pod9aac16aa_0990_4e14_a1db_e5abd9a92505.slice.
Nov 5 16:03:30.317160 systemd[1]: Created slice kubepods-burstable-podae020f58_18ae_4ec2_9ce4_9d559dab8fbd.slice - libcontainer container kubepods-burstable-podae020f58_18ae_4ec2_9ce4_9d559dab8fbd.slice.
Nov 5 16:03:30.329388 systemd[1]: Created slice kubepods-besteffort-pod8831874b_2bb6_46c1_a079_c45a246f51e1.slice - libcontainer container kubepods-besteffort-pod8831874b_2bb6_46c1_a079_c45a246f51e1.slice.
Nov 5 16:03:30.340303 systemd[1]: Created slice kubepods-besteffort-pod97bb7728_1652_4f73_a3fd_5b00174bed72.slice - libcontainer container kubepods-besteffort-pod97bb7728_1652_4f73_a3fd_5b00174bed72.slice.
Nov 5 16:03:30.352954 systemd[1]: Created slice kubepods-besteffort-pod7a4ffcd2_c3d0_43ff_8d92_50435ddcecef.slice - libcontainer container kubepods-besteffort-pod7a4ffcd2_c3d0_43ff_8d92_50435ddcecef.slice.
Nov 5 16:03:30.458069 containerd[1979]: time="2025-11-05T16:03:30.457633998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 5 16:03:30.538231 containerd[1979]: time="2025-11-05T16:03:30.538185957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snx9z,Uid:68251c55-f958-4ce6-8d9b-1ec5531fcb53,Namespace:kube-system,Attempt:0,}"
Nov 5 16:03:30.584585 containerd[1979]: time="2025-11-05T16:03:30.584357787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778445cdc8-pqppt,Uid:2651b52f-bebf-4e7b-a8cc-451e0eb22851,Namespace:calico-system,Attempt:0,}"
Nov 5 16:03:30.613196 containerd[1979]: time="2025-11-05T16:03:30.612574565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b589d999-5tfgh,Uid:9aac16aa-0990-4e14-a1db-e5abd9a92505,Namespace:calico-system,Attempt:0,}"
Nov 5 16:03:30.626702 containerd[1979]: time="2025-11-05T16:03:30.626651087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb5df,Uid:ae020f58-18ae-4ec2-9ce4-9d559dab8fbd,Namespace:kube-system,Attempt:0,}"
Nov 5 16:03:30.636016 containerd[1979]: time="2025-11-05T16:03:30.635960692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xbcp7,Uid:8831874b-2bb6-46c1-a079-c45a246f51e1,Namespace:calico-system,Attempt:0,}"
Nov 5 16:03:30.650105 containerd[1979]: time="2025-11-05T16:03:30.649969762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-5p6n9,Uid:97bb7728-1652-4f73-a3fd-5b00174bed72,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 16:03:30.657295 containerd[1979]: time="2025-11-05T16:03:30.657253696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-wz89l,Uid:7a4ffcd2-c3d0-43ff-8d92-50435ddcecef,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 16:03:31.105169 systemd[1]: Created slice kubepods-besteffort-pod6e11cbb7_6c81_460e_9d02_0e852cdd8f6c.slice - libcontainer container kubepods-besteffort-pod6e11cbb7_6c81_460e_9d02_0e852cdd8f6c.slice.
Nov 5 16:03:31.137396 containerd[1979]: time="2025-11-05T16:03:31.137108438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dsvvp,Uid:6e11cbb7-6c81-460e-9d02-0e852cdd8f6c,Namespace:calico-system,Attempt:0,}"
Nov 5 16:03:33.015295 containerd[1979]: time="2025-11-05T16:03:33.015134367Z" level=error msg="Failed to destroy network for sandbox \"405f30673d9b50abe7ecdd549bf3fa97cc2c6413f18460fbb01f3dbf176acc54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.020055 containerd[1979]: time="2025-11-05T16:03:33.019825835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb5df,Uid:ae020f58-18ae-4ec2-9ce4-9d559dab8fbd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"405f30673d9b50abe7ecdd549bf3fa97cc2c6413f18460fbb01f3dbf176acc54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.020368 systemd[1]: run-netns-cni\x2d0886068d\x2d1316\x2d00ea\x2d4adb\x2d3376a9712396.mount: Deactivated successfully.
Nov 5 16:03:33.039708 containerd[1979]: time="2025-11-05T16:03:33.039662204Z" level=error msg="Failed to destroy network for sandbox \"fc01fa3121208e2e62cafcbaa48c19f0a861322a59255bad41ef8c6e979d9757\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.044760 containerd[1979]: time="2025-11-05T16:03:33.044329645Z" level=error msg="Failed to destroy network for sandbox \"6ea98aaaa3dc3ebf7aae4293ede2f26eff1691cb4bcc7ab71f4c3a5dade3698d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.045631 systemd[1]: run-netns-cni\x2d6a685a3e\x2d40f8\x2dfdaf\x2d6a44\x2de7ecc488dacd.mount: Deactivated successfully.
Nov 5 16:03:33.050154 containerd[1979]: time="2025-11-05T16:03:33.047762880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snx9z,Uid:68251c55-f958-4ce6-8d9b-1ec5531fcb53,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea98aaaa3dc3ebf7aae4293ede2f26eff1691cb4bcc7ab71f4c3a5dade3698d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.050333 kubelet[3273]: E1105 16:03:33.049899 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405f30673d9b50abe7ecdd549bf3fa97cc2c6413f18460fbb01f3dbf176acc54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.050333 kubelet[3273]: E1105 16:03:33.050003 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405f30673d9b50abe7ecdd549bf3fa97cc2c6413f18460fbb01f3dbf176acc54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xb5df"
Nov 5 16:03:33.050333 kubelet[3273]: E1105 16:03:33.050054 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"405f30673d9b50abe7ecdd549bf3fa97cc2c6413f18460fbb01f3dbf176acc54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xb5df"
Nov 5 16:03:33.052274 containerd[1979]: time="2025-11-05T16:03:33.051627674Z" level=error msg="Failed to destroy network for sandbox \"e1a1f0fe2e68d0eb2094bf1ffc9cf70b89e0baecaf2902958496aca484ab6aa6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.053254 systemd[1]: run-netns-cni\x2df72dfad4\x2d76a9\x2d1484\x2d8eb1\x2d2da7160513d7.mount: Deactivated successfully.
Nov 5 16:03:33.058080 containerd[1979]: time="2025-11-05T16:03:33.057651044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b589d999-5tfgh,Uid:9aac16aa-0990-4e14-a1db-e5abd9a92505,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a1f0fe2e68d0eb2094bf1ffc9cf70b89e0baecaf2902958496aca484ab6aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.061799 containerd[1979]: time="2025-11-05T16:03:33.059050269Z" level=error msg="Failed to destroy network for sandbox \"77158a2a5fa69f9ff766db2441d862e6237a2f0c7aee4453ea95c1eeb72601b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.061799 containerd[1979]: time="2025-11-05T16:03:33.059932423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778445cdc8-pqppt,Uid:2651b52f-bebf-4e7b-a8cc-451e0eb22851,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc01fa3121208e2e62cafcbaa48c19f0a861322a59255bad41ef8c6e979d9757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.061799 containerd[1979]: time="2025-11-05T16:03:33.059106820Z" level=error msg="Failed to destroy network for sandbox \"02b5ae28f1f05fc7908e2764e50501add5f7cb41a6a8587b50a96c729ef6eaf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.061799 containerd[1979]: time="2025-11-05T16:03:33.061111189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dsvvp,Uid:6e11cbb7-6c81-460e-9d02-0e852cdd8f6c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b5ae28f1f05fc7908e2764e50501add5f7cb41a6a8587b50a96c729ef6eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.061563 systemd[1]: run-netns-cni\x2da3373084\x2d76e7\x2db80f\x2da75a\x2dabe2a8322a18.mount: Deactivated successfully.
Nov 5 16:03:33.062265 kubelet[3273]: E1105 16:03:33.061181 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc01fa3121208e2e62cafcbaa48c19f0a861322a59255bad41ef8c6e979d9757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.062265 kubelet[3273]: E1105 16:03:33.061248 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc01fa3121208e2e62cafcbaa48c19f0a861322a59255bad41ef8c6e979d9757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778445cdc8-pqppt"
Nov 5 16:03:33.062265 kubelet[3273]: E1105 16:03:33.061278 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc01fa3121208e2e62cafcbaa48c19f0a861322a59255bad41ef8c6e979d9757\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778445cdc8-pqppt"
Nov 5 16:03:33.062424 kubelet[3273]: E1105 16:03:33.061345 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-778445cdc8-pqppt_calico-system(2651b52f-bebf-4e7b-a8cc-451e0eb22851)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-778445cdc8-pqppt_calico-system(2651b52f-bebf-4e7b-a8cc-451e0eb22851)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc01fa3121208e2e62cafcbaa48c19f0a861322a59255bad41ef8c6e979d9757\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778445cdc8-pqppt" podUID="2651b52f-bebf-4e7b-a8cc-451e0eb22851"
Nov 5 16:03:33.062424 kubelet[3273]: E1105 16:03:33.061402 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea98aaaa3dc3ebf7aae4293ede2f26eff1691cb4bcc7ab71f4c3a5dade3698d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.062424 kubelet[3273]: E1105 16:03:33.061425 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea98aaaa3dc3ebf7aae4293ede2f26eff1691cb4bcc7ab71f4c3a5dade3698d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snx9z"
Nov 5 16:03:33.062598 kubelet[3273]: E1105 16:03:33.061445 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea98aaaa3dc3ebf7aae4293ede2f26eff1691cb4bcc7ab71f4c3a5dade3698d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-snx9z"
Nov 5 16:03:33.062598 kubelet[3273]: E1105 16:03:33.061479 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-snx9z_kube-system(68251c55-f958-4ce6-8d9b-1ec5531fcb53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-snx9z_kube-system(68251c55-f958-4ce6-8d9b-1ec5531fcb53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ea98aaaa3dc3ebf7aae4293ede2f26eff1691cb4bcc7ab71f4c3a5dade3698d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-snx9z" podUID="68251c55-f958-4ce6-8d9b-1ec5531fcb53"
Nov 5 16:03:33.062598 kubelet[3273]: E1105 16:03:33.061518 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a1f0fe2e68d0eb2094bf1ffc9cf70b89e0baecaf2902958496aca484ab6aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.062745 kubelet[3273]: E1105 16:03:33.061539 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a1f0fe2e68d0eb2094bf1ffc9cf70b89e0baecaf2902958496aca484ab6aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh"
Nov 5 16:03:33.062745 kubelet[3273]: E1105 16:03:33.061575 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a1f0fe2e68d0eb2094bf1ffc9cf70b89e0baecaf2902958496aca484ab6aa6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh"
Nov 5 16:03:33.062745 kubelet[3273]: E1105 16:03:33.061607 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74b589d999-5tfgh_calico-system(9aac16aa-0990-4e14-a1db-e5abd9a92505)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74b589d999-5tfgh_calico-system(9aac16aa-0990-4e14-a1db-e5abd9a92505)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1a1f0fe2e68d0eb2094bf1ffc9cf70b89e0baecaf2902958496aca484ab6aa6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505"
Nov 5 16:03:33.064787 kubelet[3273]: E1105 16:03:33.064474 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xb5df_kube-system(ae020f58-18ae-4ec2-9ce4-9d559dab8fbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xb5df_kube-system(ae020f58-18ae-4ec2-9ce4-9d559dab8fbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"405f30673d9b50abe7ecdd549bf3fa97cc2c6413f18460fbb01f3dbf176acc54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xb5df" podUID="ae020f58-18ae-4ec2-9ce4-9d559dab8fbd"
Nov 5 16:03:33.064787 kubelet[3273]: E1105 16:03:33.064635 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b5ae28f1f05fc7908e2764e50501add5f7cb41a6a8587b50a96c729ef6eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.064787 kubelet[3273]: E1105 16:03:33.064675 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b5ae28f1f05fc7908e2764e50501add5f7cb41a6a8587b50a96c729ef6eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dsvvp"
Nov 5 16:03:33.065067 kubelet[3273]: E1105 16:03:33.064700 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b5ae28f1f05fc7908e2764e50501add5f7cb41a6a8587b50a96c729ef6eaf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dsvvp"
Nov 5 16:03:33.065067 kubelet[3273]: E1105 16:03:33.064741 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02b5ae28f1f05fc7908e2764e50501add5f7cb41a6a8587b50a96c729ef6eaf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:03:33.068800 containerd[1979]: time="2025-11-05T16:03:33.068350722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-wz89l,Uid:7a4ffcd2-c3d0-43ff-8d92-50435ddcecef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77158a2a5fa69f9ff766db2441d862e6237a2f0c7aee4453ea95c1eeb72601b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.068951 kubelet[3273]: E1105 16:03:33.068606 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77158a2a5fa69f9ff766db2441d862e6237a2f0c7aee4453ea95c1eeb72601b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.068951 kubelet[3273]: E1105 16:03:33.068660 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77158a2a5fa69f9ff766db2441d862e6237a2f0c7aee4453ea95c1eeb72601b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l"
Nov 5 16:03:33.068951 kubelet[3273]: E1105 16:03:33.068683 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77158a2a5fa69f9ff766db2441d862e6237a2f0c7aee4453ea95c1eeb72601b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l"
Nov 5 16:03:33.070176 kubelet[3273]: E1105 16:03:33.068732 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df446974d-wz89l_calico-apiserver(7a4ffcd2-c3d0-43ff-8d92-50435ddcecef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df446974d-wz89l_calico-apiserver(7a4ffcd2-c3d0-43ff-8d92-50435ddcecef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77158a2a5fa69f9ff766db2441d862e6237a2f0c7aee4453ea95c1eeb72601b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef"
Nov 5 16:03:33.069175 systemd[1]: run-netns-cni\x2d4f245346\x2d02f4\x2daa32\x2d2107\x2d53735dc74808.mount: Deactivated successfully.
Nov 5 16:03:33.069311 systemd[1]: run-netns-cni\x2df7d601e0\x2d8652\x2de080\x2d2223\x2dd58954409feb.mount: Deactivated successfully.
Nov 5 16:03:33.070708 containerd[1979]: time="2025-11-05T16:03:33.070671207Z" level=error msg="Failed to destroy network for sandbox \"62fe6822b8dbfb39c2335ac97226b699d01cae3636e54ef2f61f45166b8b4071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.075281 containerd[1979]: time="2025-11-05T16:03:33.075228901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xbcp7,Uid:8831874b-2bb6-46c1-a079-c45a246f51e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe6822b8dbfb39c2335ac97226b699d01cae3636e54ef2f61f45166b8b4071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.076219 kubelet[3273]: E1105 16:03:33.075476 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe6822b8dbfb39c2335ac97226b699d01cae3636e54ef2f61f45166b8b4071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 16:03:33.076219 kubelet[3273]: E1105 16:03:33.075534 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe6822b8dbfb39c2335ac97226b699d01cae3636e54ef2f61f45166b8b4071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xbcp7"
Nov 5 16:03:33.076219 kubelet[3273]: E1105 16:03:33.075563 3273 kuberuntime_manager.go:1237]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe6822b8dbfb39c2335ac97226b699d01cae3636e54ef2f61f45166b8b4071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xbcp7" Nov 5 16:03:33.076378 kubelet[3273]: E1105 16:03:33.075613 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xbcp7_calico-system(8831874b-2bb6-46c1-a079-c45a246f51e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xbcp7_calico-system(8831874b-2bb6-46c1-a079-c45a246f51e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62fe6822b8dbfb39c2335ac97226b699d01cae3636e54ef2f61f45166b8b4071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:03:33.077262 containerd[1979]: time="2025-11-05T16:03:33.077222479Z" level=error msg="Failed to destroy network for sandbox \"67d531f7d3c27bea0112f69590116f1195e5cc1fb8fb5ca38ebdf3cc54db9686\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:33.079017 containerd[1979]: time="2025-11-05T16:03:33.078968630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-5p6n9,Uid:97bb7728-1652-4f73-a3fd-5b00174bed72,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67d531f7d3c27bea0112f69590116f1195e5cc1fb8fb5ca38ebdf3cc54db9686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:33.080199 kubelet[3273]: E1105 16:03:33.079312 3273 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d531f7d3c27bea0112f69590116f1195e5cc1fb8fb5ca38ebdf3cc54db9686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:33.080199 kubelet[3273]: E1105 16:03:33.079473 3273 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d531f7d3c27bea0112f69590116f1195e5cc1fb8fb5ca38ebdf3cc54db9686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" Nov 5 16:03:33.080199 kubelet[3273]: E1105 16:03:33.079550 3273 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d531f7d3c27bea0112f69590116f1195e5cc1fb8fb5ca38ebdf3cc54db9686\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" Nov 5 16:03:33.080656 kubelet[3273]: E1105 16:03:33.079602 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df446974d-5p6n9_calico-apiserver(97bb7728-1652-4f73-a3fd-5b00174bed72)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-6df446974d-5p6n9_calico-apiserver(97bb7728-1652-4f73-a3fd-5b00174bed72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67d531f7d3c27bea0112f69590116f1195e5cc1fb8fb5ca38ebdf3cc54db9686\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:03:34.017647 systemd[1]: run-netns-cni\x2d4c74257e\x2d264a\x2d87a2\x2d1cdd\x2d047fad22324d.mount: Deactivated successfully. Nov 5 16:03:34.017760 systemd[1]: run-netns-cni\x2da1373cf7\x2d6396\x2df4dd\x2d61dc\x2d1df1eb0cb22b.mount: Deactivated successfully. Nov 5 16:03:38.560120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052479910.mount: Deactivated successfully. Nov 5 16:03:38.602049 containerd[1979]: time="2025-11-05T16:03:38.601974554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:38.604259 containerd[1979]: time="2025-11-05T16:03:38.604217003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 16:03:38.631793 containerd[1979]: time="2025-11-05T16:03:38.631747253Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:38.637222 containerd[1979]: time="2025-11-05T16:03:38.637169407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:38.640401 containerd[1979]: time="2025-11-05T16:03:38.640357676Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.180233696s" Nov 5 16:03:38.640401 containerd[1979]: time="2025-11-05T16:03:38.640496046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 16:03:38.672591 containerd[1979]: time="2025-11-05T16:03:38.672465565Z" level=info msg="CreateContainer within sandbox \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 16:03:38.747073 containerd[1979]: time="2025-11-05T16:03:38.746749689Z" level=info msg="Container 6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:38.747065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99651083.mount: Deactivated successfully. 
Nov 5 16:03:38.772805 containerd[1979]: time="2025-11-05T16:03:38.772761575Z" level=info msg="CreateContainer within sandbox \"5caedc7820b2a219181cd040b9908b6713e22345f136613d84275ce9ffc0a274\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\"" Nov 5 16:03:38.773445 containerd[1979]: time="2025-11-05T16:03:38.773341978Z" level=info msg="StartContainer for \"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\"" Nov 5 16:03:38.779253 containerd[1979]: time="2025-11-05T16:03:38.779199614Z" level=info msg="connecting to shim 6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c" address="unix:///run/containerd/s/fd23c18b48c156e06fc85899465c07bd015ad1e0b0849c2ca820633d899e5f3b" protocol=ttrpc version=3 Nov 5 16:03:38.963314 systemd[1]: Started cri-containerd-6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c.scope - libcontainer container 6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c. Nov 5 16:03:39.059410 containerd[1979]: time="2025-11-05T16:03:39.059359301Z" level=info msg="StartContainer for \"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" returns successfully" Nov 5 16:03:41.559443 kubelet[3273]: I1105 16:03:41.559389 3273 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 16:03:41.802912 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 16:03:41.821122 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Nov 5 16:03:41.950892 containerd[1979]: time="2025-11-05T16:03:41.950854304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" id:\"bb45f9fb07ae3c2453835e8509e16dec93d54bea862a95f2744224eb260273ce\" pid:4478 exit_status:1 exited_at:{seconds:1762358621 nanos:950563955}" Nov 5 16:03:42.198186 containerd[1979]: time="2025-11-05T16:03:42.198135954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" id:\"65dba2350baca5abad2b4ad8351acd60a48b45f2a6ee207bf23e7ab71c1dfa23\" pid:4514 exit_status:1 exited_at:{seconds:1762358622 nanos:197847331}" Nov 5 16:03:42.676993 kubelet[3273]: I1105 16:03:42.676920 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k56kf" podStartSLOduration=4.897471565 podStartE2EDuration="25.671792425s" podCreationTimestamp="2025-11-05 16:03:17 +0000 UTC" firstStartedPulling="2025-11-05 16:03:17.867110977 +0000 UTC m=+52.995735244" lastFinishedPulling="2025-11-05 16:03:38.641431836 +0000 UTC m=+73.770056104" observedRunningTime="2025-11-05 16:03:39.62150744 +0000 UTC m=+74.750131720" watchObservedRunningTime="2025-11-05 16:03:42.671792425 +0000 UTC m=+77.800416729" Nov 5 16:03:42.823208 kubelet[3273]: I1105 16:03:42.823139 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8h6z\" (UniqueName: \"kubernetes.io/projected/2651b52f-bebf-4e7b-a8cc-451e0eb22851-kube-api-access-r8h6z\") pod \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\" (UID: \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\") " Nov 5 16:03:42.827661 kubelet[3273]: I1105 16:03:42.827222 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-ca-bundle\") pod \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\" (UID: 
\"2651b52f-bebf-4e7b-a8cc-451e0eb22851\") " Nov 5 16:03:42.827661 kubelet[3273]: I1105 16:03:42.827274 3273 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-backend-key-pair\") pod \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\" (UID: \"2651b52f-bebf-4e7b-a8cc-451e0eb22851\") " Nov 5 16:03:42.830636 kubelet[3273]: I1105 16:03:42.830601 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2651b52f-bebf-4e7b-a8cc-451e0eb22851" (UID: "2651b52f-bebf-4e7b-a8cc-451e0eb22851"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 16:03:42.833382 systemd[1]: var-lib-kubelet-pods-2651b52f\x2dbebf\x2d4e7b\x2da8cc\x2d451e0eb22851-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr8h6z.mount: Deactivated successfully. Nov 5 16:03:42.838752 kubelet[3273]: I1105 16:03:42.838299 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2651b52f-bebf-4e7b-a8cc-451e0eb22851" (UID: "2651b52f-bebf-4e7b-a8cc-451e0eb22851"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 16:03:42.839168 systemd[1]: var-lib-kubelet-pods-2651b52f\x2dbebf\x2d4e7b\x2da8cc\x2d451e0eb22851-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 5 16:03:42.845762 kubelet[3273]: I1105 16:03:42.845514 3273 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2651b52f-bebf-4e7b-a8cc-451e0eb22851-kube-api-access-r8h6z" (OuterVolumeSpecName: "kube-api-access-r8h6z") pod "2651b52f-bebf-4e7b-a8cc-451e0eb22851" (UID: "2651b52f-bebf-4e7b-a8cc-451e0eb22851"). InnerVolumeSpecName "kube-api-access-r8h6z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:03:42.930793 kubelet[3273]: I1105 16:03:42.930395 3273 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r8h6z\" (UniqueName: \"kubernetes.io/projected/2651b52f-bebf-4e7b-a8cc-451e0eb22851-kube-api-access-r8h6z\") on node \"ip-172-31-17-172\" DevicePath \"\"" Nov 5 16:03:42.930793 kubelet[3273]: I1105 16:03:42.930464 3273 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-ca-bundle\") on node \"ip-172-31-17-172\" DevicePath \"\"" Nov 5 16:03:42.930793 kubelet[3273]: I1105 16:03:42.930477 3273 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2651b52f-bebf-4e7b-a8cc-451e0eb22851-whisker-backend-key-pair\") on node \"ip-172-31-17-172\" DevicePath \"\"" Nov 5 16:03:43.136198 systemd[1]: Removed slice kubepods-besteffort-pod2651b52f_bebf_4e7b_a8cc_451e0eb22851.slice - libcontainer container kubepods-besteffort-pod2651b52f_bebf_4e7b_a8cc_451e0eb22851.slice. Nov 5 16:03:44.097240 containerd[1979]: time="2025-11-05T16:03:44.097100654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b589d999-5tfgh,Uid:9aac16aa-0990-4e14-a1db-e5abd9a92505,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:44.109692 systemd[1]: Created slice kubepods-besteffort-pod5ed90129_d345_48e7_a043_180d8e15dcce.slice - libcontainer container kubepods-besteffort-pod5ed90129_d345_48e7_a043_180d8e15dcce.slice. 
Nov 5 16:03:44.144047 kubelet[3273]: I1105 16:03:44.143272 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrbf\" (UniqueName: \"kubernetes.io/projected/5ed90129-d345-48e7-a043-180d8e15dcce-kube-api-access-scrbf\") pod \"whisker-c574dd99b-btm8k\" (UID: \"5ed90129-d345-48e7-a043-180d8e15dcce\") " pod="calico-system/whisker-c574dd99b-btm8k" Nov 5 16:03:44.144047 kubelet[3273]: I1105 16:03:44.143352 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5ed90129-d345-48e7-a043-180d8e15dcce-whisker-backend-key-pair\") pod \"whisker-c574dd99b-btm8k\" (UID: \"5ed90129-d345-48e7-a043-180d8e15dcce\") " pod="calico-system/whisker-c574dd99b-btm8k" Nov 5 16:03:44.144047 kubelet[3273]: I1105 16:03:44.143407 3273 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ed90129-d345-48e7-a043-180d8e15dcce-whisker-ca-bundle\") pod \"whisker-c574dd99b-btm8k\" (UID: \"5ed90129-d345-48e7-a043-180d8e15dcce\") " pod="calico-system/whisker-c574dd99b-btm8k" Nov 5 16:03:44.439353 containerd[1979]: time="2025-11-05T16:03:44.439298218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c574dd99b-btm8k,Uid:5ed90129-d345-48e7-a043-180d8e15dcce,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:44.811047 systemd-networkd[1567]: vxlan.calico: Link UP Nov 5 16:03:44.811056 systemd-networkd[1567]: vxlan.calico: Gained carrier Nov 5 16:03:44.813499 (udev-worker)[4496]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:03:44.840046 (udev-worker)[4729]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 16:03:45.100959 containerd[1979]: time="2025-11-05T16:03:45.100592771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dsvvp,Uid:6e11cbb7-6c81-460e-9d02-0e852cdd8f6c,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:45.102333 containerd[1979]: time="2025-11-05T16:03:45.101509744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xbcp7,Uid:8831874b-2bb6-46c1-a079-c45a246f51e1,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:45.102333 containerd[1979]: time="2025-11-05T16:03:45.101704747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb5df,Uid:ae020f58-18ae-4ec2-9ce4-9d559dab8fbd,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:45.122413 kubelet[3273]: I1105 16:03:45.122363 3273 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2651b52f-bebf-4e7b-a8cc-451e0eb22851" path="/var/lib/kubelet/pods/2651b52f-bebf-4e7b-a8cc-451e0eb22851/volumes" Nov 5 16:03:46.476197 systemd-networkd[1567]: vxlan.calico: Gained IPv6LL Nov 5 16:03:47.098174 containerd[1979]: time="2025-11-05T16:03:47.097605938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-5p6n9,Uid:97bb7728-1652-4f73-a3fd-5b00174bed72,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:47.098174 containerd[1979]: time="2025-11-05T16:03:47.097677803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snx9z,Uid:68251c55-f958-4ce6-8d9b-1ec5531fcb53,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:48.098214 containerd[1979]: time="2025-11-05T16:03:48.097511080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-wz89l,Uid:7a4ffcd2-c3d0-43ff-8d92-50435ddcecef,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:48.142035 systemd-networkd[1567]: cali69bcc6e0fe8: Link UP Nov 5 16:03:48.147721 systemd-networkd[1567]: cali69bcc6e0fe8: Gained carrier Nov 5 16:03:48.235747 containerd[1979]: 2025-11-05 16:03:47.191 
[INFO][4830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0 coredns-668d6bf9bc- kube-system 68251c55-f958-4ce6-8d9b-1ec5531fcb53 846 0 2025-11-05 16:02:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-172 coredns-668d6bf9bc-snx9z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69bcc6e0fe8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-" Nov 5 16:03:48.235747 containerd[1979]: 2025-11-05 16:03:47.192 [INFO][4830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.235747 containerd[1979]: 2025-11-05 16:03:47.982 [INFO][4853] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" HandleID="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Workload="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4853] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" HandleID="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Workload="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5440), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-172", "pod":"coredns-668d6bf9bc-snx9z", "timestamp":"2025-11-05 16:03:47.982878604 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:47.984 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:47.996 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" host="ip-172-31-17-172" Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:48.060 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:48.068 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:48.073 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.242260 containerd[1979]: 2025-11-05 16:03:48.076 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.077 [INFO][4853] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 
handle="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" host="ip-172-31-17-172" Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.079 [INFO][4853] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.085 [INFO][4853] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" host="ip-172-31-17-172" Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.096 [INFO][4853] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.65/26] block=192.168.62.64/26 handle="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" host="ip-172-31-17-172" Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.097 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.65/26] handle="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" host="ip-172-31-17-172" Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.097 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:48.247927 containerd[1979]: 2025-11-05 16:03:48.097 [INFO][4853] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.65/26] IPv6=[] ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" HandleID="k8s-pod-network.fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Workload="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.248821 containerd[1979]: 2025-11-05 16:03:48.110 [INFO][4830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"68251c55-f958-4ce6-8d9b-1ec5531fcb53", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"coredns-668d6bf9bc-snx9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69bcc6e0fe8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.248821 containerd[1979]: 2025-11-05 16:03:48.112 [INFO][4830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.65/32] ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.248821 containerd[1979]: 2025-11-05 16:03:48.113 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69bcc6e0fe8 ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.248821 containerd[1979]: 2025-11-05 16:03:48.153 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.248821 containerd[1979]: 2025-11-05 16:03:48.167 [INFO][4830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"68251c55-f958-4ce6-8d9b-1ec5531fcb53", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e", Pod:"coredns-668d6bf9bc-snx9z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69bcc6e0fe8", MAC:"de:94:92:9f:4f:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.248821 containerd[1979]: 2025-11-05 16:03:48.209 [INFO][4830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" Namespace="kube-system" Pod="coredns-668d6bf9bc-snx9z" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--snx9z-eth0" Nov 5 16:03:48.251342 systemd-networkd[1567]: cali82050a39d04: Link UP Nov 5 16:03:48.261369 systemd-networkd[1567]: cali82050a39d04: Gained carrier Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:47.191 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0 calico-apiserver-6df446974d- calico-apiserver 97bb7728-1652-4f73-a3fd-5b00174bed72 858 0 2025-11-05 16:03:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df446974d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-172 calico-apiserver-6df446974d-5p6n9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82050a39d04 [] [] }} ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:47.192 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:47.981 [INFO][4851] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" 
HandleID="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Workload="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:47.982 [INFO][4851] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" HandleID="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Workload="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d53c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-172", "pod":"calico-apiserver-6df446974d-5p6n9", "timestamp":"2025-11-05 16:03:47.981550663 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:47.982 [INFO][4851] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.097 [INFO][4851] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.097 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.135 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.167 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.179 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.182 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.186 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.186 [INFO][4851] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.188 [INFO][4851] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6 Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.195 [INFO][4851] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.212 [INFO][4851] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.66/26] block=192.168.62.64/26 
handle="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.212 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.66/26] handle="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" host="ip-172-31-17-172" Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.213 [INFO][4851] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:48.293379 containerd[1979]: 2025-11-05 16:03:48.213 [INFO][4851] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.66/26] IPv6=[] ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" HandleID="k8s-pod-network.a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Workload="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.296490 containerd[1979]: 2025-11-05 16:03:48.225 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0", GenerateName:"calico-apiserver-6df446974d-", Namespace:"calico-apiserver", SelfLink:"", UID:"97bb7728-1652-4f73-a3fd-5b00174bed72", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df446974d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"calico-apiserver-6df446974d-5p6n9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82050a39d04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.296490 containerd[1979]: 2025-11-05 16:03:48.225 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.66/32] ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.296490 containerd[1979]: 2025-11-05 16:03:48.225 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82050a39d04 ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.296490 containerd[1979]: 2025-11-05 16:03:48.262 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.296490 
containerd[1979]: 2025-11-05 16:03:48.267 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0", GenerateName:"calico-apiserver-6df446974d-", Namespace:"calico-apiserver", SelfLink:"", UID:"97bb7728-1652-4f73-a3fd-5b00174bed72", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df446974d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6", Pod:"calico-apiserver-6df446974d-5p6n9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82050a39d04", MAC:"ce:f9:c4:4d:7f:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.296490 
containerd[1979]: 2025-11-05 16:03:48.285 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-5p6n9" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--5p6n9-eth0" Nov 5 16:03:48.349905 systemd-networkd[1567]: cali8deb9d998ff: Link UP Nov 5 16:03:48.354187 systemd-networkd[1567]: cali8deb9d998ff: Gained carrier Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:45.360 [INFO][4750] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0 coredns-668d6bf9bc- kube-system ae020f58-18ae-4ec2-9ce4-9d559dab8fbd 857 0 2025-11-05 16:02:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-172 coredns-668d6bf9bc-xb5df eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8deb9d998ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:45.360 [INFO][4750] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:47.981 [INFO][4814] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" 
HandleID="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Workload="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4814] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" HandleID="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Workload="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000118780), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-172", "pod":"coredns-668d6bf9bc-xb5df", "timestamp":"2025-11-05 16:03:47.981605189 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4814] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.213 [INFO][4814] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.213 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.247 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.269 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.281 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.289 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.299 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.299 [INFO][4814] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.303 [INFO][4814] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7 Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.316 [INFO][4814] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.329 [INFO][4814] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.67/26] block=192.168.62.64/26 
handle="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.329 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.67/26] handle="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" host="ip-172-31-17-172" Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.329 [INFO][4814] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:48.390155 containerd[1979]: 2025-11-05 16:03:48.329 [INFO][4814] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.67/26] IPv6=[] ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" HandleID="k8s-pod-network.f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Workload="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.393463 containerd[1979]: 2025-11-05 16:03:48.336 [INFO][4750] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ae020f58-18ae-4ec2-9ce4-9d559dab8fbd", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"coredns-668d6bf9bc-xb5df", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8deb9d998ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.393463 containerd[1979]: 2025-11-05 16:03:48.337 [INFO][4750] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.67/32] ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.393463 containerd[1979]: 2025-11-05 16:03:48.337 [INFO][4750] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8deb9d998ff ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.393463 containerd[1979]: 2025-11-05 16:03:48.357 [INFO][4750] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.393463 containerd[1979]: 2025-11-05 16:03:48.359 [INFO][4750] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ae020f58-18ae-4ec2-9ce4-9d559dab8fbd", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7", Pod:"coredns-668d6bf9bc-xb5df", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8deb9d998ff", MAC:"72:0f:68:25:8d:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.393463 containerd[1979]: 2025-11-05 16:03:48.383 [INFO][4750] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-xb5df" WorkloadEndpoint="ip--172--31--17--172-k8s-coredns--668d6bf9bc--xb5df-eth0" Nov 5 16:03:48.474209 systemd-networkd[1567]: cali3ec9575ddba: Link UP Nov 5 16:03:48.475925 systemd-networkd[1567]: cali3ec9575ddba: Gained carrier Nov 5 16:03:48.516355 containerd[1979]: time="2025-11-05T16:03:48.515882939Z" level=info msg="connecting to shim fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e" address="unix:///run/containerd/s/9b6544890b13ce652a8f655a07ac8e7444dafca9bd81f7a645963ef01e53bc0f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:48.540946 containerd[1979]: time="2025-11-05T16:03:48.539964651Z" level=info msg="connecting to shim a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6" address="unix:///run/containerd/s/3fdc4b45dbf097e0908d8e55687ab62c29519ac90b007731d1f2f78c0829e4a4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:44.637 [INFO][4665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0 calico-kube-controllers-74b589d999- calico-system 9aac16aa-0990-4e14-a1db-e5abd9a92505 856 0 2025-11-05 16:03:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b589d999 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-172 calico-kube-controllers-74b589d999-5tfgh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3ec9575ddba [] [] }} ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:44.638 [INFO][4665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4698] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" HandleID="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Workload="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4698] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" HandleID="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Workload="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f91e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-172", "pod":"calico-kube-controllers-74b589d999-5tfgh", "timestamp":"2025-11-05 16:03:47.98349557 +0000 
UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4698] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.331 [INFO][4698] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.331 [INFO][4698] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.371 [INFO][4698] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.390 [INFO][4698] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.399 [INFO][4698] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.402 [INFO][4698] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.405 [INFO][4698] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.405 [INFO][4698] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.410 [INFO][4698] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10 Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.419 [INFO][4698] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.428 [INFO][4698] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.68/26] block=192.168.62.64/26 handle="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.428 [INFO][4698] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.68/26] handle="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" host="ip-172-31-17-172" Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.428 [INFO][4698] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:48.541508 containerd[1979]: 2025-11-05 16:03:48.429 [INFO][4698] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.68/26] IPv6=[] ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" HandleID="k8s-pod-network.f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Workload="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" Nov 5 16:03:48.542640 containerd[1979]: 2025-11-05 16:03:48.451 [INFO][4665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0", GenerateName:"calico-kube-controllers-74b589d999-", Namespace:"calico-system", SelfLink:"", UID:"9aac16aa-0990-4e14-a1db-e5abd9a92505", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b589d999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"calico-kube-controllers-74b589d999-5tfgh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.62.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec9575ddba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.542640 containerd[1979]: 2025-11-05 16:03:48.452 [INFO][4665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.68/32] ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" Nov 5 16:03:48.542640 containerd[1979]: 2025-11-05 16:03:48.452 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ec9575ddba ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" Nov 5 16:03:48.542640 containerd[1979]: 2025-11-05 16:03:48.490 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" Nov 5 16:03:48.542640 containerd[1979]: 2025-11-05 16:03:48.507 [INFO][4665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0", GenerateName:"calico-kube-controllers-74b589d999-", Namespace:"calico-system", SelfLink:"", UID:"9aac16aa-0990-4e14-a1db-e5abd9a92505", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b589d999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10", Pod:"calico-kube-controllers-74b589d999-5tfgh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3ec9575ddba", MAC:"6a:aa:36:73:52:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.542640 containerd[1979]: 2025-11-05 16:03:48.536 [INFO][4665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" Namespace="calico-system" Pod="calico-kube-controllers-74b589d999-5tfgh" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--kube--controllers--74b589d999--5tfgh-eth0" 
Nov 5 16:03:48.545563 containerd[1979]: time="2025-11-05T16:03:48.543533702Z" level=info msg="connecting to shim f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7" address="unix:///run/containerd/s/3d7e7ad010f52bdada876efa46174a6589f455841dbe485f8b60b5c071e31f4d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:48.627069 systemd-networkd[1567]: cali708c15bf9a3: Link UP Nov 5 16:03:48.629076 systemd-networkd[1567]: cali708c15bf9a3: Gained carrier Nov 5 16:03:48.684435 systemd[1]: Started cri-containerd-f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7.scope - libcontainer container f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7. Nov 5 16:03:48.692593 systemd[1]: Started cri-containerd-fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e.scope - libcontainer container fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e. Nov 5 16:03:48.709636 containerd[1979]: time="2025-11-05T16:03:48.709582520Z" level=info msg="connecting to shim f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10" address="unix:///run/containerd/s/926c4b3b83ac4e8337f3d601084a8d3f8c7fd95af6204731fed2833d683ff74c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:48.710417 systemd[1]: Started cri-containerd-a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6.scope - libcontainer container a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6. 
Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:45.367 [INFO][4744] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0 csi-node-driver- calico-system 6e11cbb7-6c81-460e-9d02-0e852cdd8f6c 734 0 2025-11-05 16:03:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-172 csi-node-driver-dsvvp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali708c15bf9a3 [] [] }} ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:45.368 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:47.981 [INFO][4819] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" HandleID="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Workload="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4819] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" 
HandleID="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Workload="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003054c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-172", "pod":"csi-node-driver-dsvvp", "timestamp":"2025-11-05 16:03:47.981951879 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.429 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.429 [INFO][4819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.482 [INFO][4819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.500 [INFO][4819] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.515 [INFO][4819] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.522 [INFO][4819] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.541 [INFO][4819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 
16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.544 [INFO][4819] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.550 [INFO][4819] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.585 [INFO][4819] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.607 [INFO][4819] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.69/26] block=192.168.62.64/26 handle="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.607 [INFO][4819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.69/26] handle="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" host="ip-172-31-17-172" Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.607 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:48.723173 containerd[1979]: 2025-11-05 16:03:48.607 [INFO][4819] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.69/26] IPv6=[] ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" HandleID="k8s-pod-network.40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Workload="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.726477 containerd[1979]: 2025-11-05 16:03:48.619 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"csi-node-driver-dsvvp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali708c15bf9a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.726477 containerd[1979]: 2025-11-05 16:03:48.621 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.69/32] ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.726477 containerd[1979]: 2025-11-05 16:03:48.621 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali708c15bf9a3 ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.726477 containerd[1979]: 2025-11-05 16:03:48.630 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.726477 containerd[1979]: 2025-11-05 16:03:48.637 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e11cbb7-6c81-460e-9d02-0e852cdd8f6c", 
ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d", Pod:"csi-node-driver-dsvvp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali708c15bf9a3", MAC:"d2:a1:7f:01:49:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.726477 containerd[1979]: 2025-11-05 16:03:48.692 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" Namespace="calico-system" Pod="csi-node-driver-dsvvp" WorkloadEndpoint="ip--172--31--17--172-k8s-csi--node--driver--dsvvp-eth0" Nov 5 16:03:48.795548 systemd[1]: Started cri-containerd-f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10.scope - libcontainer container f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10. 
Nov 5 16:03:48.841756 containerd[1979]: time="2025-11-05T16:03:48.840734457Z" level=info msg="connecting to shim 40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d" address="unix:///run/containerd/s/e8ffaf13e0e10df0e62e0674ace95dd23f4f5f4eb073687e195712a09f9915b0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:48.881743 systemd-networkd[1567]: calidf874e9f7df: Link UP Nov 5 16:03:48.885801 systemd-networkd[1567]: calidf874e9f7df: Gained carrier Nov 5 16:03:48.960698 containerd[1979]: time="2025-11-05T16:03:48.960650635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-5p6n9,Uid:97bb7728-1652-4f73-a3fd-5b00174bed72,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a7b21a0d924c6622fdef28127a06ce73e3e90e4a80301c3a2944629f5383d1c6\"" Nov 5 16:03:48.974543 systemd[1]: Started cri-containerd-40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d.scope - libcontainer container 40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d. 
Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:44.637 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0 whisker-c574dd99b- calico-system 5ed90129-d345-48e7-a043-180d8e15dcce 936 0 2025-11-05 16:03:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c574dd99b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-172 whisker-c574dd99b-btm8k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf874e9f7df [] [] }} ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:44.639 [INFO][4684] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:47.982 [INFO][4702] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" HandleID="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Workload="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4702] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" HandleID="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Workload="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-172", "pod":"whisker-c574dd99b-btm8k", "timestamp":"2025-11-05 16:03:47.982215628 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.612 [INFO][4702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.612 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.653 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.673 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.688 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.695 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.703 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.703 [INFO][4702] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 
handle="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.707 [INFO][4702] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4 Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.747 [INFO][4702] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.798 [INFO][4702] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.70/26] block=192.168.62.64/26 handle="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.801 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.70/26] handle="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" host="ip-172-31-17-172" Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.801 [INFO][4702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:48.995392 containerd[1979]: 2025-11-05 16:03:48.801 [INFO][4702] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.70/26] IPv6=[] ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" HandleID="k8s-pod-network.340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Workload="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:48.997581 containerd[1979]: 2025-11-05 16:03:48.847 [INFO][4684] cni-plugin/k8s.go 418: Populated endpoint ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0", GenerateName:"whisker-c574dd99b-", Namespace:"calico-system", SelfLink:"", UID:"5ed90129-d345-48e7-a043-180d8e15dcce", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c574dd99b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"whisker-c574dd99b-btm8k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"calidf874e9f7df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.997581 containerd[1979]: 2025-11-05 16:03:48.849 [INFO][4684] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.70/32] ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:48.997581 containerd[1979]: 2025-11-05 16:03:48.849 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf874e9f7df ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:48.997581 containerd[1979]: 2025-11-05 16:03:48.907 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:48.997581 containerd[1979]: 2025-11-05 16:03:48.914 [INFO][4684] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0", GenerateName:"whisker-c574dd99b-", Namespace:"calico-system", SelfLink:"", UID:"5ed90129-d345-48e7-a043-180d8e15dcce", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, 
time.November, 5, 16, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c574dd99b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4", Pod:"whisker-c574dd99b-btm8k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf874e9f7df", MAC:"12:04:88:bf:85:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:48.997581 containerd[1979]: 2025-11-05 16:03:48.989 [INFO][4684] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" Namespace="calico-system" Pod="whisker-c574dd99b-btm8k" WorkloadEndpoint="ip--172--31--17--172-k8s-whisker--c574dd99b--btm8k-eth0" Nov 5 16:03:49.012588 containerd[1979]: time="2025-11-05T16:03:49.012356278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:49.089200 systemd-networkd[1567]: calid0881933d39: Link UP Nov 5 16:03:49.099630 systemd-networkd[1567]: calid0881933d39: Gained carrier Nov 5 16:03:49.184123 containerd[1979]: time="2025-11-05T16:03:49.183660177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b589d999-5tfgh,Uid:9aac16aa-0990-4e14-a1db-e5abd9a92505,Namespace:calico-system,Attempt:0,} returns 
sandbox id \"f7d00750a14ec037afc080398413e3fa620b76af4bb45ecd89548f3afdbd3a10\"" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:45.304 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0 goldmane-666569f655- calico-system 8831874b-2bb6-46c1-a079-c45a246f51e1 851 0 2025-11-05 16:03:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-172 goldmane-666569f655-xbcp7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid0881933d39 [] [] }} ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:45.306 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:47.981 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" HandleID="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Workload="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:47.983 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" 
HandleID="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Workload="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00010a560), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-172", "pod":"goldmane-666569f655-xbcp7", "timestamp":"2025-11-05 16:03:47.981923125 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:47.984 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:48.803 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:48.804 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:48.890 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:48.942 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:48.972 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:48.985 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.007 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 
16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.008 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.013 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.036 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.062 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.71/26] block=192.168.62.64/26 handle="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.062 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.71/26] handle="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" host="ip-172-31-17-172" Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.062 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:49.204959 containerd[1979]: 2025-11-05 16:03:49.062 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.71/26] IPv6=[] ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" HandleID="k8s-pod-network.d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Workload="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.207340 containerd[1979]: 2025-11-05 16:03:49.077 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8831874b-2bb6-46c1-a079-c45a246f51e1", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"goldmane-666569f655-xbcp7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calid0881933d39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:49.207340 containerd[1979]: 2025-11-05 16:03:49.077 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.71/32] ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.207340 containerd[1979]: 2025-11-05 16:03:49.077 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0881933d39 ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.207340 containerd[1979]: 2025-11-05 16:03:49.145 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.207340 containerd[1979]: 2025-11-05 16:03:49.151 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8831874b-2bb6-46c1-a079-c45a246f51e1", ResourceVersion:"851", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b", Pod:"goldmane-666569f655-xbcp7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0881933d39", MAC:"0a:02:e6:af:db:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:49.207340 containerd[1979]: 2025-11-05 16:03:49.189 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" Namespace="calico-system" Pod="goldmane-666569f655-xbcp7" WorkloadEndpoint="ip--172--31--17--172-k8s-goldmane--666569f655--xbcp7-eth0" Nov 5 16:03:49.239449 containerd[1979]: time="2025-11-05T16:03:49.239400415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-snx9z,Uid:68251c55-f958-4ce6-8d9b-1ec5531fcb53,Namespace:kube-system,Attempt:0,} returns sandbox id \"fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e\"" Nov 5 16:03:49.273748 containerd[1979]: time="2025-11-05T16:03:49.272990036Z" level=info msg="CreateContainer within sandbox 
\"fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:03:49.290636 containerd[1979]: time="2025-11-05T16:03:49.289808511Z" level=info msg="connecting to shim 340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4" address="unix:///run/containerd/s/760fdcdee9b2e4ef71d7c197f41afa6662f9e38e9d4103a78d269ac1628e784b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:49.298424 containerd[1979]: time="2025-11-05T16:03:49.298386287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xb5df,Uid:ae020f58-18ae-4ec2-9ce4-9d559dab8fbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7\"" Nov 5 16:03:49.347633 systemd-networkd[1567]: cali935e24c20c0: Link UP Nov 5 16:03:49.353229 systemd-networkd[1567]: cali935e24c20c0: Gained carrier Nov 5 16:03:49.362456 systemd-networkd[1567]: cali69bcc6e0fe8: Gained IPv6LL Nov 5 16:03:49.381104 containerd[1979]: time="2025-11-05T16:03:49.374080095Z" level=info msg="CreateContainer within sandbox \"f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:03:49.407702 containerd[1979]: time="2025-11-05T16:03:49.407658082Z" level=info msg="connecting to shim d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b" address="unix:///run/containerd/s/07cab1cc71f288667b18d4ff2b368712e3a5e25604b10f986beb86ab2ac3a232" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:49.411729 containerd[1979]: time="2025-11-05T16:03:49.408327286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dsvvp,Uid:6e11cbb7-6c81-460e-9d02-0e852cdd8f6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"40a0af6ffadf80a8e45908f40f92909fb1f2ff9ed798b21ab9358cee257fb96d\"" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:48.289 [INFO][4872] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0 calico-apiserver-6df446974d- calico-apiserver 7a4ffcd2-c3d0-43ff-8d92-50435ddcecef 854 0 2025-11-05 16:03:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df446974d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-172 calico-apiserver-6df446974d-wz89l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali935e24c20c0 [] [] }} ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:48.289 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:48.371 [INFO][4893] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" HandleID="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Workload="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:48.371 [INFO][4893] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" HandleID="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" 
Workload="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-172", "pod":"calico-apiserver-6df446974d-wz89l", "timestamp":"2025-11-05 16:03:48.371644465 +0000 UTC"}, Hostname:"ip-172-31-17-172", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:48.371 [INFO][4893] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.062 [INFO][4893] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.063 [INFO][4893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-172' Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.121 [INFO][4893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.165 [INFO][4893] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.206 [INFO][4893] ipam/ipam.go 511: Trying affinity for 192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.211 [INFO][4893] ipam/ipam.go 158: Attempting to load block cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.232 [INFO][4893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.62.64/26 host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.232 [INFO][4893] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.62.64/26 handle="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.240 [INFO][4893] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9 Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.262 [INFO][4893] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.62.64/26 handle="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.297 [INFO][4893] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.62.72/26] block=192.168.62.64/26 handle="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.299 [INFO][4893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.62.72/26] handle="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" host="ip-172-31-17-172" Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.299 [INFO][4893] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:49.416797 containerd[1979]: 2025-11-05 16:03:49.299 [INFO][4893] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.62.72/26] IPv6=[] ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" HandleID="k8s-pod-network.e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Workload="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.422465 containerd[1979]: 2025-11-05 16:03:49.315 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0", GenerateName:"calico-apiserver-6df446974d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a4ffcd2-c3d0-43ff-8d92-50435ddcecef", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df446974d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"", Pod:"calico-apiserver-6df446974d-wz89l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.72/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali935e24c20c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:49.422465 containerd[1979]: 2025-11-05 16:03:49.316 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.72/32] ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.422465 containerd[1979]: 2025-11-05 16:03:49.317 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali935e24c20c0 ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.422465 containerd[1979]: 2025-11-05 16:03:49.344 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.422465 containerd[1979]: 2025-11-05 16:03:49.347 [INFO][4872] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0", GenerateName:"calico-apiserver-6df446974d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a4ffcd2-c3d0-43ff-8d92-50435ddcecef", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df446974d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-172", ContainerID:"e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9", Pod:"calico-apiserver-6df446974d-wz89l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali935e24c20c0", MAC:"3a:d0:c6:13:d7:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:49.422465 containerd[1979]: 2025-11-05 16:03:49.395 [INFO][4872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" Namespace="calico-apiserver" Pod="calico-apiserver-6df446974d-wz89l" WorkloadEndpoint="ip--172--31--17--172-k8s-calico--apiserver--6df446974d--wz89l-eth0" Nov 5 16:03:49.463365 systemd[1]: Started cri-containerd-340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4.scope - libcontainer 
container 340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4. Nov 5 16:03:49.478218 systemd[1]: Started cri-containerd-d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b.scope - libcontainer container d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b. Nov 5 16:03:49.509596 containerd[1979]: time="2025-11-05T16:03:49.508509604Z" level=info msg="connecting to shim e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9" address="unix:///run/containerd/s/2c70aa4fe9ab6a5c34a6d83f199b8d0eda9804a70672a660301d47797fdeb563" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:49.536536 containerd[1979]: time="2025-11-05T16:03:49.536337304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:49.538750 containerd[1979]: time="2025-11-05T16:03:49.538711417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:49.539199 containerd[1979]: time="2025-11-05T16:03:49.539127164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:49.548265 systemd-networkd[1567]: cali8deb9d998ff: Gained IPv6LL Nov 5 16:03:49.550885 kubelet[3273]: E1105 16:03:49.545780 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:49.557754 containerd[1979]: time="2025-11-05T16:03:49.556635807Z" level=info msg="Container 
443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:49.560245 containerd[1979]: time="2025-11-05T16:03:49.560204349Z" level=info msg="Container 681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:49.565977 kubelet[3273]: E1105 16:03:49.565919 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:49.568822 systemd[1]: Started cri-containerd-e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9.scope - libcontainer container e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9. Nov 5 16:03:49.570429 containerd[1979]: time="2025-11-05T16:03:49.569876792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:03:49.585906 kubelet[3273]: E1105 16:03:49.585496 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-5p6n9_calico-apiserver(97bb7728-1652-4f73-a3fd-5b00174bed72): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:49.597363 kubelet[3273]: E1105 16:03:49.596670 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:03:49.623203 containerd[1979]: time="2025-11-05T16:03:49.620572267Z" level=info msg="CreateContainer within sandbox \"f5aa61798771be99c6c2a29afb4aea310fb1b25b66fdcb1c02d4f7fb534f98a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf\"" Nov 5 16:03:49.628897 containerd[1979]: time="2025-11-05T16:03:49.628761860Z" level=info msg="StartContainer for \"443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf\"" Nov 5 16:03:49.631811 containerd[1979]: time="2025-11-05T16:03:49.631740992Z" level=info msg="CreateContainer within sandbox \"fef3964fabbe12cb2f682a5332c778ebe8fedf1abd940ad470b3dec6cdcfa82e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e\"" Nov 5 16:03:49.633631 containerd[1979]: time="2025-11-05T16:03:49.633551521Z" level=info msg="connecting to shim 443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf" address="unix:///run/containerd/s/3d7e7ad010f52bdada876efa46174a6589f455841dbe485f8b60b5c071e31f4d" protocol=ttrpc version=3 Nov 5 16:03:49.639210 containerd[1979]: 
time="2025-11-05T16:03:49.639096531Z" level=info msg="StartContainer for \"681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e\"" Nov 5 16:03:49.644870 containerd[1979]: time="2025-11-05T16:03:49.644656234Z" level=info msg="connecting to shim 681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e" address="unix:///run/containerd/s/9b6544890b13ce652a8f655a07ac8e7444dafca9bd81f7a645963ef01e53bc0f" protocol=ttrpc version=3 Nov 5 16:03:49.674869 kubelet[3273]: E1105 16:03:49.672726 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:03:49.676482 systemd-networkd[1567]: cali82050a39d04: Gained IPv6LL Nov 5 16:03:49.692829 containerd[1979]: time="2025-11-05T16:03:49.692724479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xbcp7,Uid:8831874b-2bb6-46c1-a079-c45a246f51e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"d533dfc1c6e1bfb01a4fea9a809bbf88dd960f084d34a4b3e753ef78105d4c4b\"" Nov 5 16:03:49.723348 systemd[1]: Started cri-containerd-681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e.scope - libcontainer container 681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e. Nov 5 16:03:49.758302 systemd[1]: Started cri-containerd-443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf.scope - libcontainer container 443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf. 
Nov 5 16:03:49.766620 containerd[1979]: time="2025-11-05T16:03:49.766568779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c574dd99b-btm8k,Uid:5ed90129-d345-48e7-a043-180d8e15dcce,Namespace:calico-system,Attempt:0,} returns sandbox id \"340a49609db2dfc6095305140817a9ea79f62e899ca9bdf92f699c61afda8ac4\"" Nov 5 16:03:49.798017 containerd[1979]: time="2025-11-05T16:03:49.797972278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df446974d-wz89l,Uid:7a4ffcd2-c3d0-43ff-8d92-50435ddcecef,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e6924841a2e58d2b2c01aecda2419dfa8f54fb3fba9a833595281a310614fea9\"" Nov 5 16:03:49.847706 containerd[1979]: time="2025-11-05T16:03:49.847651237Z" level=info msg="StartContainer for \"681386f812d6af756c876598154dad0d1d2c9597e54615f416eb5d1b9076874e\" returns successfully" Nov 5 16:03:49.855358 containerd[1979]: time="2025-11-05T16:03:49.855239502Z" level=info msg="StartContainer for \"443ef175ec1cb84bd353add65556ba88069c3746a78a181b79b9517a58d8addf\" returns successfully" Nov 5 16:03:49.925756 containerd[1979]: time="2025-11-05T16:03:49.925675634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:49.927852 containerd[1979]: time="2025-11-05T16:03:49.927781050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:49.927852 containerd[1979]: time="2025-11-05T16:03:49.927788304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:03:49.928204 kubelet[3273]: E1105 16:03:49.928132 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:49.928204 kubelet[3273]: E1105 16:03:49.928187 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:49.928716 containerd[1979]: time="2025-11-05T16:03:49.928569608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:03:49.930590 kubelet[3273]: E1105 16:03:49.930525 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cer
t.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk4bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b589d999-5tfgh_calico-system(9aac16aa-0990-4e14-a1db-e5abd9a92505): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:49.932134 kubelet[3273]: E1105 16:03:49.931864 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505" Nov 5 16:03:50.117510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799372331.mount: Deactivated successfully. Nov 5 16:03:50.188973 systemd-networkd[1567]: cali3ec9575ddba: Gained IPv6LL Nov 5 16:03:50.464314 containerd[1979]: time="2025-11-05T16:03:50.464133751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:50.466272 containerd[1979]: time="2025-11-05T16:03:50.466161094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:03:50.466272 containerd[1979]: time="2025-11-05T16:03:50.466205653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:03:50.466485 kubelet[3273]: E1105 16:03:50.466443 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:50.466551 kubelet[3273]: E1105 16:03:50.466494 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:50.466801 kubelet[3273]: E1105 16:03:50.466694 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPro
be:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:50.468282 containerd[1979]: time="2025-11-05T16:03:50.468230252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:03:50.700238 systemd-networkd[1567]: cali708c15bf9a3: Gained IPv6LL Nov 5 16:03:50.702072 systemd-networkd[1567]: calidf874e9f7df: Gained IPv6LL Nov 5 16:03:50.724713 kubelet[3273]: E1105 16:03:50.724592 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505" Nov 5 16:03:50.724713 kubelet[3273]: E1105 16:03:50.724592 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" 
podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:03:50.748888 kubelet[3273]: I1105 16:03:50.745926 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-snx9z" podStartSLOduration=81.745907033 podStartE2EDuration="1m21.745907033s" podCreationTimestamp="2025-11-05 16:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:03:50.741086154 +0000 UTC m=+85.869710430" watchObservedRunningTime="2025-11-05 16:03:50.745907033 +0000 UTC m=+85.874531308" Nov 5 16:03:50.765476 systemd-networkd[1567]: calid0881933d39: Gained IPv6LL Nov 5 16:03:50.790405 kubelet[3273]: I1105 16:03:50.790348 3273 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xb5df" podStartSLOduration=81.790331295 podStartE2EDuration="1m21.790331295s" podCreationTimestamp="2025-11-05 16:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:03:50.772664933 +0000 UTC m=+85.901289210" watchObservedRunningTime="2025-11-05 16:03:50.790331295 +0000 UTC m=+85.918955571" Nov 5 16:03:50.883059 containerd[1979]: time="2025-11-05T16:03:50.882964568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:50.885282 containerd[1979]: time="2025-11-05T16:03:50.885225833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:03:50.885424 containerd[1979]: time="2025-11-05T16:03:50.885250945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 
16:03:50.885654 kubelet[3273]: E1105 16:03:50.885597 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:50.885654 kubelet[3273]: E1105 16:03:50.885649 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:03:50.886063 containerd[1979]: time="2025-11-05T16:03:50.886006328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:03:50.886703 kubelet[3273]: E1105 16:03:50.886521 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7c9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xbcp7_calico-system(8831874b-2bb6-46c1-a079-c45a246f51e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:50.887793 kubelet[3273]: E1105 16:03:50.887729 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:03:51.020217 systemd-networkd[1567]: cali935e24c20c0: Gained IPv6LL Nov 5 16:03:51.161374 containerd[1979]: time="2025-11-05T16:03:51.161192169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:51.163502 containerd[1979]: 
time="2025-11-05T16:03:51.163445293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:03:51.163612 containerd[1979]: time="2025-11-05T16:03:51.163552148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:03:51.163830 kubelet[3273]: E1105 16:03:51.163776 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:51.163917 kubelet[3273]: E1105 16:03:51.163841 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:51.164285 containerd[1979]: time="2025-11-05T16:03:51.164215769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:51.164746 kubelet[3273]: E1105 16:03:51.164590 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4f07a66d4bf44ddd85a38efffe012746,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:51.547720 containerd[1979]: time="2025-11-05T16:03:51.547661583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:51.550009 
containerd[1979]: time="2025-11-05T16:03:51.549945021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:51.550155 containerd[1979]: time="2025-11-05T16:03:51.550049900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:51.550291 kubelet[3273]: E1105 16:03:51.550234 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:51.550347 kubelet[3273]: E1105 16:03:51.550294 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:51.550669 kubelet[3273]: E1105 16:03:51.550541 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pg2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-wz89l_calico-apiserver(7a4ffcd2-c3d0-43ff-8d92-50435ddcecef): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:51.550821 containerd[1979]: time="2025-11-05T16:03:51.550704073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:03:51.552201 kubelet[3273]: E1105 16:03:51.552143 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef" Nov 5 16:03:51.728273 kubelet[3273]: E1105 16:03:51.728187 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:03:51.729400 kubelet[3273]: E1105 16:03:51.729063 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef" Nov 5 16:03:51.862646 containerd[1979]: time="2025-11-05T16:03:51.862498883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:51.864921 containerd[1979]: time="2025-11-05T16:03:51.864859519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:03:51.865205 containerd[1979]: time="2025-11-05T16:03:51.865098270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:03:51.865345 kubelet[3273]: E1105 16:03:51.865297 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:51.865461 kubelet[3273]: E1105 16:03:51.865355 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:51.865655 kubelet[3273]: E1105 16:03:51.865601 3273 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:51.866259 containerd[1979]: time="2025-11-05T16:03:51.866226174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:03:51.867874 kubelet[3273]: E1105 16:03:51.867791 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:03:52.201252 containerd[1979]: time="2025-11-05T16:03:52.201203152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:52.203398 containerd[1979]: time="2025-11-05T16:03:52.203342351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
Nov 5 16:03:52.203398 containerd[1979]: time="2025-11-05T16:03:52.203355756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:52.203600 kubelet[3273]: E1105 16:03:52.203573 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:52.203645 kubelet[3273]: E1105 16:03:52.203615 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:03:52.203785 kubelet[3273]: E1105 16:03:52.203722 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:52.204961 kubelet[3273]: E1105 16:03:52.204904 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:03:52.731924 kubelet[3273]: E1105 16:03:52.731857 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:03:52.733138 kubelet[3273]: E1105 16:03:52.731626 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:03:53.806853 ntpd[1926]: Listen normally on 6 vxlan.calico 192.168.62.64:123 Nov 5 16:03:53.806925 ntpd[1926]: Listen normally on 7 vxlan.calico [fe80::64e1:15ff:fec8:2b30%4]:123 Nov 5 16:03:53.806957 ntpd[1926]: Listen normally on 8 cali69bcc6e0fe8 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 5 16:03:53.806985 ntpd[1926]: Listen normally on 9 cali82050a39d04 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 16:03:53.807013 ntpd[1926]: Listen normally on 10 cali8deb9d998ff [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 16:03:53.807083 ntpd[1926]: Listen normally on 11 cali3ec9575ddba [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 16:03:53.807112 ntpd[1926]: Listen normally on 12 cali708c15bf9a3 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 16:03:53.807152 ntpd[1926]: Listen normally on 13 calidf874e9f7df [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 16:03:53.807180 ntpd[1926]: Listen normally on 14 calid0881933d39 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 16:03:53.807208 ntpd[1926]: Listen normally on 15 cali935e24c20c0 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 16:04:02.100552 containerd[1979]: time="2025-11-05T16:04:02.100493779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:04:02.408816 containerd[1979]: time="2025-11-05T16:04:02.408693862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:02.417398 containerd[1979]: time="2025-11-05T16:04:02.417087808Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:04:02.417398 containerd[1979]: time="2025-11-05T16:04:02.417100597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:02.421227 kubelet[3273]: E1105 16:04:02.421157 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:02.427408 kubelet[3273]: E1105 16:04:02.421337 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:02.427568 kubelet[3273]: E1105 16:04:02.427484 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk4bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b589d999-5tfgh_calico-system(9aac16aa-0990-4e14-a1db-e5abd9a92505): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:02.431917 kubelet[3273]: E1105 16:04:02.431829 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505" Nov 5 16:04:03.106509 containerd[1979]: time="2025-11-05T16:04:03.106467258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:04:03.374426 containerd[1979]: 
time="2025-11-05T16:04:03.374355763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:03.377660 containerd[1979]: time="2025-11-05T16:04:03.377594445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:04:03.378088 containerd[1979]: time="2025-11-05T16:04:03.377714878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:03.378231 kubelet[3273]: E1105 16:04:03.378086 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:03.378231 kubelet[3273]: E1105 16:04:03.378140 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:03.378594 kubelet[3273]: E1105 16:04:03.378384 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7c9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xbcp7_calico-system(8831874b-2bb6-46c1-a079-c45a246f51e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:03.379608 containerd[1979]: time="2025-11-05T16:04:03.379469264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:04:03.380631 kubelet[3273]: E1105 16:04:03.380534 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:04:03.698718 containerd[1979]: time="2025-11-05T16:04:03.698496318Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Nov 5 16:04:03.700920 containerd[1979]: time="2025-11-05T16:04:03.700804562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:04:03.700920 containerd[1979]: time="2025-11-05T16:04:03.700890809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:04:03.701295 kubelet[3273]: E1105 16:04:03.701234 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:03.701295 kubelet[3273]: E1105 16:04:03.701289 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:03.702210 kubelet[3273]: E1105 16:04:03.701688 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4f07a66d4bf44ddd85a38efffe012746,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:03.703339 containerd[1979]: time="2025-11-05T16:04:03.701993037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:04.009857 
containerd[1979]: time="2025-11-05T16:04:04.009672972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:04.012352 containerd[1979]: time="2025-11-05T16:04:04.012272108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:04.012352 containerd[1979]: time="2025-11-05T16:04:04.012272789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:04.012836 kubelet[3273]: E1105 16:04:04.012686 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:04.012836 kubelet[3273]: E1105 16:04:04.012739 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:04.013152 kubelet[3273]: E1105 16:04:04.013001 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pg2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-wz89l_calico-apiserver(7a4ffcd2-c3d0-43ff-8d92-50435ddcecef): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:04.014747 kubelet[3273]: E1105 16:04:04.014701 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef" Nov 5 16:04:04.014917 containerd[1979]: time="2025-11-05T16:04:04.014884336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:04:04.329444 containerd[1979]: time="2025-11-05T16:04:04.329190064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:04.331496 containerd[1979]: time="2025-11-05T16:04:04.331264421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:04:04.331496 containerd[1979]: time="2025-11-05T16:04:04.331398818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:04.332556 kubelet[3273]: E1105 16:04:04.332333 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:04.333043 kubelet[3273]: E1105 16:04:04.332534 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:04.334092 kubelet[3273]: E1105 16:04:04.333779 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capa
bilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:04.335469 kubelet[3273]: E1105 16:04:04.335412 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:04:05.099625 containerd[1979]: time="2025-11-05T16:04:05.099589722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:05.104324 systemd[1]: Started 
sshd@8-172.31.17.172:22-139.178.68.195:50458.service - OpenSSH per-connection server daemon (139.178.68.195:50458). Nov 5 16:04:05.379730 containerd[1979]: time="2025-11-05T16:04:05.379650196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:05.381501 sshd[5445]: Accepted publickey for core from 139.178.68.195 port 50458 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:05.382587 containerd[1979]: time="2025-11-05T16:04:05.382009099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:05.382587 containerd[1979]: time="2025-11-05T16:04:05.382151894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:05.383249 kubelet[3273]: E1105 16:04:05.382332 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:05.383249 kubelet[3273]: E1105 16:04:05.382389 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:05.383249 kubelet[3273]: E1105 16:04:05.382552 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-5p6n9_calico-apiserver(97bb7728-1652-4f73-a3fd-5b00174bed72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:05.383990 kubelet[3273]: E1105 16:04:05.383837 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:04:05.385428 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:05.399340 systemd-logind[1939]: New session 8 of user core. 
Nov 5 16:04:05.409390 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 16:04:06.352253 sshd[5455]: Connection closed by 139.178.68.195 port 50458 Nov 5 16:04:06.353171 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:06.359699 systemd-logind[1939]: Session 8 logged out. Waiting for processes to exit. Nov 5 16:04:06.359977 systemd[1]: sshd@8-172.31.17.172:22-139.178.68.195:50458.service: Deactivated successfully. Nov 5 16:04:06.362665 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 16:04:06.365425 systemd-logind[1939]: Removed session 8. Nov 5 16:04:07.097283 containerd[1979]: time="2025-11-05T16:04:07.096904386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:04:07.381204 containerd[1979]: time="2025-11-05T16:04:07.381158157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:07.383419 containerd[1979]: time="2025-11-05T16:04:07.383354388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:04:07.383525 containerd[1979]: time="2025-11-05T16:04:07.383445961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:04:07.383669 kubelet[3273]: E1105 16:04:07.383606 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:07.383669 kubelet[3273]: E1105 16:04:07.383655 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:07.384158 kubelet[3273]: E1105 16:04:07.383784 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:07.385614 containerd[1979]: time="2025-11-05T16:04:07.385555984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:04:07.892600 containerd[1979]: time="2025-11-05T16:04:07.892548330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:07.894712 containerd[1979]: time="2025-11-05T16:04:07.894658975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:04:07.894919 containerd[1979]: time="2025-11-05T16:04:07.894744205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:04:07.894995 kubelet[3273]: E1105 16:04:07.894881 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:07.894995 kubelet[3273]: E1105 16:04:07.894923 
3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:07.895501 kubelet[3273]: E1105 16:04:07.895070 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,Allo
wPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:07.896841 kubelet[3273]: E1105 16:04:07.896786 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:04:11.393698 systemd[1]: Started sshd@9-172.31.17.172:22-139.178.68.195:50466.service - OpenSSH per-connection server daemon (139.178.68.195:50466). 
Nov 5 16:04:11.585626 sshd[5470]: Accepted publickey for core from 139.178.68.195 port 50466 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:11.587906 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:11.594553 systemd-logind[1939]: New session 9 of user core. Nov 5 16:04:11.597228 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 16:04:11.810451 sshd[5473]: Connection closed by 139.178.68.195 port 50466 Nov 5 16:04:11.811015 sshd-session[5470]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:11.815404 systemd[1]: sshd@9-172.31.17.172:22-139.178.68.195:50466.service: Deactivated successfully. Nov 5 16:04:11.817607 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 16:04:11.818739 systemd-logind[1939]: Session 9 logged out. Waiting for processes to exit. Nov 5 16:04:11.821286 systemd-logind[1939]: Removed session 9. Nov 5 16:04:12.265483 containerd[1979]: time="2025-11-05T16:04:12.265445396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" id:\"d11772e50ff7cf4707e4887289a36c9603b418e40ff146351d3b06b1056fef48\" pid:5496 exited_at:{seconds:1762358652 nanos:264936506}" Nov 5 16:04:16.097159 kubelet[3273]: E1105 16:04:16.096137 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef" Nov 5 16:04:16.097159 kubelet[3273]: E1105 16:04:16.096550 3273 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505" Nov 5 16:04:16.844180 systemd[1]: Started sshd@10-172.31.17.172:22-139.178.68.195:59832.service - OpenSSH per-connection server daemon (139.178.68.195:59832). Nov 5 16:04:17.059850 sshd[5509]: Accepted publickey for core from 139.178.68.195 port 59832 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:17.061560 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:17.067581 systemd-logind[1939]: New session 10 of user core. Nov 5 16:04:17.072313 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 16:04:17.362407 sshd[5512]: Connection closed by 139.178.68.195 port 59832 Nov 5 16:04:17.391052 sshd-session[5509]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:17.397811 systemd[1]: sshd@10-172.31.17.172:22-139.178.68.195:59832.service: Deactivated successfully. Nov 5 16:04:17.400670 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 16:04:17.404131 systemd-logind[1939]: Session 10 logged out. Waiting for processes to exit. Nov 5 16:04:17.406398 systemd[1]: Started sshd@11-172.31.17.172:22-139.178.68.195:59834.service - OpenSSH per-connection server daemon (139.178.68.195:59834). Nov 5 16:04:17.408556 systemd-logind[1939]: Removed session 10. 
Nov 5 16:04:17.609239 sshd[5524]: Accepted publickey for core from 139.178.68.195 port 59834 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:17.611721 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:17.619897 systemd-logind[1939]: New session 11 of user core. Nov 5 16:04:17.632290 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 16:04:17.950415 sshd[5527]: Connection closed by 139.178.68.195 port 59834 Nov 5 16:04:17.951599 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:17.962782 systemd[1]: sshd@11-172.31.17.172:22-139.178.68.195:59834.service: Deactivated successfully. Nov 5 16:04:17.966466 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 16:04:17.969975 systemd-logind[1939]: Session 11 logged out. Waiting for processes to exit. Nov 5 16:04:17.974223 systemd-logind[1939]: Removed session 11. Nov 5 16:04:17.992542 systemd[1]: Started sshd@12-172.31.17.172:22-139.178.68.195:59848.service - OpenSSH per-connection server daemon (139.178.68.195:59848). 
Nov 5 16:04:18.098519 kubelet[3273]: E1105 16:04:18.098246 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:04:18.098519 kubelet[3273]: E1105 16:04:18.098356 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:04:18.098519 kubelet[3273]: E1105 16:04:18.098462 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:04:18.191849 sshd[5537]: Accepted publickey for core from 139.178.68.195 port 59848 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:18.193209 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:18.205419 systemd-logind[1939]: New session 12 of user core. Nov 5 16:04:18.213902 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 16:04:18.572897 sshd[5540]: Connection closed by 139.178.68.195 port 59848 Nov 5 16:04:18.582265 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:18.598492 systemd[1]: sshd@12-172.31.17.172:22-139.178.68.195:59848.service: Deactivated successfully. Nov 5 16:04:18.603611 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 16:04:18.605331 systemd-logind[1939]: Session 12 logged out. Waiting for processes to exit. Nov 5 16:04:18.609299 systemd-logind[1939]: Removed session 12. 
Nov 5 16:04:21.098573 kubelet[3273]: E1105 16:04:21.098405 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:04:23.607618 systemd[1]: Started sshd@13-172.31.17.172:22-139.178.68.195:43480.service - OpenSSH per-connection server daemon (139.178.68.195:43480). Nov 5 16:04:23.857014 sshd[5553]: Accepted publickey for core from 139.178.68.195 port 43480 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:23.860089 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:23.868625 systemd-logind[1939]: New session 13 of user core. Nov 5 16:04:23.875389 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 16:04:24.168135 sshd[5556]: Connection closed by 139.178.68.195 port 43480 Nov 5 16:04:24.169653 sshd-session[5553]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:24.190365 systemd[1]: sshd@13-172.31.17.172:22-139.178.68.195:43480.service: Deactivated successfully. 
Nov 5 16:04:24.194278 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 16:04:24.196311 systemd-logind[1939]: Session 13 logged out. Waiting for processes to exit. Nov 5 16:04:24.198760 systemd-logind[1939]: Removed session 13. Nov 5 16:04:29.097115 containerd[1979]: time="2025-11-05T16:04:29.097063185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:04:29.208169 systemd[1]: Started sshd@14-172.31.17.172:22-139.178.68.195:43494.service - OpenSSH per-connection server daemon (139.178.68.195:43494). Nov 5 16:04:29.391791 sshd[5580]: Accepted publickey for core from 139.178.68.195 port 43494 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:29.393478 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:29.399167 systemd-logind[1939]: New session 14 of user core. Nov 5 16:04:29.406314 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 16:04:29.453208 containerd[1979]: time="2025-11-05T16:04:29.453125159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:29.455639 containerd[1979]: time="2025-11-05T16:04:29.455568462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:04:29.455639 containerd[1979]: time="2025-11-05T16:04:29.455589907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:29.455990 kubelet[3273]: E1105 16:04:29.455828 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:29.455990 kubelet[3273]: E1105 16:04:29.455882 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:29.456734 kubelet[3273]: E1105 16:04:29.456088 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7c9q,ReadOnly:true,MountPath:/var/
run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xbcp7_calico-system(8831874b-2bb6-46c1-a079-c45a246f51e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:29.458000 kubelet[3273]: E1105 16:04:29.457930 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:04:29.705504 sshd[5583]: Connection closed by 139.178.68.195 port 43494 Nov 5 16:04:29.706545 sshd-session[5580]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:29.710558 systemd[1]: sshd@14-172.31.17.172:22-139.178.68.195:43494.service: Deactivated successfully. Nov 5 16:04:29.713386 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 16:04:29.716100 systemd-logind[1939]: Session 14 logged out. Waiting for processes to exit. Nov 5 16:04:29.719723 systemd-logind[1939]: Removed session 14. Nov 5 16:04:30.096116 containerd[1979]: time="2025-11-05T16:04:30.095849385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:04:30.529855 containerd[1979]: time="2025-11-05T16:04:30.529807235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:30.532178 containerd[1979]: time="2025-11-05T16:04:30.532106521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:04:30.532596 containerd[1979]: time="2025-11-05T16:04:30.532198522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:04:30.532655 kubelet[3273]: E1105 16:04:30.532509 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:30.532655 kubelet[3273]: E1105 16:04:30.532556 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:30.533115 kubelet[3273]: E1105 16:04:30.532656 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4f07a66d4bf44ddd85a38efffe012746,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolic
y{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:30.535960 containerd[1979]: time="2025-11-05T16:04:30.535917544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:04:30.856930 containerd[1979]: time="2025-11-05T16:04:30.856801332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:30.858913 containerd[1979]: time="2025-11-05T16:04:30.858850518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:04:30.859119 containerd[1979]: time="2025-11-05T16:04:30.858887609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:30.859193 kubelet[3273]: E1105 16:04:30.859138 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:30.859193 kubelet[3273]: E1105 16:04:30.859186 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:30.859557 kubelet[3273]: E1105 16:04:30.859414 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:30.861264 kubelet[3273]: E1105 16:04:30.860855 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:04:31.098486 containerd[1979]: time="2025-11-05T16:04:31.098196926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:04:31.384116 containerd[1979]: time="2025-11-05T16:04:31.383934549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:31.386613 containerd[1979]: time="2025-11-05T16:04:31.386484364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:04:31.386613 containerd[1979]: time="2025-11-05T16:04:31.386572849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:31.386790 kubelet[3273]: E1105 16:04:31.386720 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:31.386790 kubelet[3273]: E1105 16:04:31.386762 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:31.387279 containerd[1979]: time="2025-11-05T16:04:31.387245086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:31.399866 kubelet[3273]: E1105 16:04:31.399689 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk4bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b589d999-5tfgh_calico-system(9aac16aa-0990-4e14-a1db-e5abd9a92505): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:31.401298 kubelet[3273]: E1105 16:04:31.401252 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505" Nov 5 16:04:31.665923 containerd[1979]: time="2025-11-05T16:04:31.665793790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:31.668762 containerd[1979]: 
time="2025-11-05T16:04:31.668454697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:31.668762 containerd[1979]: time="2025-11-05T16:04:31.668630721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:31.669129 kubelet[3273]: E1105 16:04:31.669082 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:31.669950 kubelet[3273]: E1105 16:04:31.669135 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:31.669950 kubelet[3273]: E1105 16:04:31.669263 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pg2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-wz89l_calico-apiserver(7a4ffcd2-c3d0-43ff-8d92-50435ddcecef): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:31.670857 kubelet[3273]: E1105 16:04:31.670804 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef" Nov 5 16:04:33.100043 containerd[1979]: time="2025-11-05T16:04:33.099082668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:33.395678 containerd[1979]: time="2025-11-05T16:04:33.395585371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:33.397798 containerd[1979]: time="2025-11-05T16:04:33.397731206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:33.398219 containerd[1979]: time="2025-11-05T16:04:33.397839683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:33.399420 kubelet[3273]: E1105 16:04:33.398129 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:33.399420 kubelet[3273]: E1105 16:04:33.398207 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:33.399420 kubelet[3273]: E1105 16:04:33.398529 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-5p6n9_calico-apiserver(97bb7728-1652-4f73-a3fd-5b00174bed72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:33.399953 kubelet[3273]: E1105 16:04:33.399880 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:04:33.400701 containerd[1979]: time="2025-11-05T16:04:33.400217388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:04:33.704436 containerd[1979]: 
time="2025-11-05T16:04:33.704294718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:33.706904 containerd[1979]: time="2025-11-05T16:04:33.706840267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:04:33.707058 containerd[1979]: time="2025-11-05T16:04:33.706957850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:04:33.707187 kubelet[3273]: E1105 16:04:33.707143 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:33.707294 kubelet[3273]: E1105 16:04:33.707201 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:33.707386 kubelet[3273]: E1105 16:04:33.707337 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:33.710029 containerd[1979]: time="2025-11-05T16:04:33.709983463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:04:34.035646 containerd[1979]: time="2025-11-05T16:04:34.035374805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:34.037801 containerd[1979]: time="2025-11-05T16:04:34.037721570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:04:34.037903 containerd[1979]: time="2025-11-05T16:04:34.037814096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:04:34.038086 kubelet[3273]: E1105 16:04:34.037963 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:34.038086 kubelet[3273]: E1105 16:04:34.038014 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:34.038608 kubelet[3273]: E1105 
16:04:34.038315 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:34.040224 kubelet[3273]: E1105 16:04:34.040180 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:04:34.747829 systemd[1]: Started sshd@15-172.31.17.172:22-139.178.68.195:46738.service - OpenSSH per-connection server daemon (139.178.68.195:46738). Nov 5 16:04:34.977911 sshd[5598]: Accepted publickey for core from 139.178.68.195 port 46738 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:34.980721 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:34.988306 systemd-logind[1939]: New session 15 of user core. Nov 5 16:04:34.993235 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 5 16:04:35.346050 sshd[5601]: Connection closed by 139.178.68.195 port 46738 Nov 5 16:04:35.348162 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:35.364989 systemd[1]: sshd@15-172.31.17.172:22-139.178.68.195:46738.service: Deactivated successfully. Nov 5 16:04:35.369712 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 16:04:35.373078 systemd-logind[1939]: Session 15 logged out. Waiting for processes to exit. Nov 5 16:04:35.375656 systemd-logind[1939]: Removed session 15. Nov 5 16:04:40.386059 systemd[1]: Started sshd@16-172.31.17.172:22-139.178.68.195:46742.service - OpenSSH per-connection server daemon (139.178.68.195:46742). Nov 5 16:04:40.703533 sshd[5614]: Accepted publickey for core from 139.178.68.195 port 46742 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:40.727574 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:40.737228 systemd-logind[1939]: New session 16 of user core. Nov 5 16:04:40.742446 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 16:04:41.154175 sshd[5617]: Connection closed by 139.178.68.195 port 46742 Nov 5 16:04:41.155266 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:41.165248 systemd[1]: sshd@16-172.31.17.172:22-139.178.68.195:46742.service: Deactivated successfully. Nov 5 16:04:41.165494 systemd-logind[1939]: Session 16 logged out. Waiting for processes to exit. Nov 5 16:04:41.171731 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 16:04:41.194315 systemd-logind[1939]: Removed session 16. Nov 5 16:04:41.195113 systemd[1]: Started sshd@17-172.31.17.172:22-139.178.68.195:46744.service - OpenSSH per-connection server daemon (139.178.68.195:46744). 
Nov 5 16:04:41.439885 sshd[5629]: Accepted publickey for core from 139.178.68.195 port 46744 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:41.441199 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:41.448092 systemd-logind[1939]: New session 17 of user core. Nov 5 16:04:41.455282 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 16:04:42.194633 containerd[1979]: time="2025-11-05T16:04:42.194526355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" id:\"afba65f20b14558e0b0404f46242e4d5ee98371799badc461da920187e8a2a98\" pid:5649 exited_at:{seconds:1762358682 nanos:193691268}" Nov 5 16:04:42.592206 sshd[5632]: Connection closed by 139.178.68.195 port 46744 Nov 5 16:04:42.592848 sshd-session[5629]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:42.600133 systemd[1]: sshd@17-172.31.17.172:22-139.178.68.195:46744.service: Deactivated successfully. Nov 5 16:04:42.605701 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 16:04:42.608016 systemd-logind[1939]: Session 17 logged out. Waiting for processes to exit. Nov 5 16:04:42.611069 systemd-logind[1939]: Removed session 17. Nov 5 16:04:42.630249 systemd[1]: Started sshd@18-172.31.17.172:22-139.178.68.195:46748.service - OpenSSH per-connection server daemon (139.178.68.195:46748). Nov 5 16:04:42.864519 sshd[5667]: Accepted publickey for core from 139.178.68.195 port 46748 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:42.866889 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:42.876093 systemd-logind[1939]: New session 18 of user core. Nov 5 16:04:42.883726 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 5 16:04:44.097617 kubelet[3273]: E1105 16:04:44.096919 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:04:44.099686 kubelet[3273]: E1105 16:04:44.099594 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:04:44.235222 sshd[5670]: Connection closed by 139.178.68.195 port 46748 Nov 5 16:04:44.236606 sshd-session[5667]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:44.244850 systemd[1]: sshd@18-172.31.17.172:22-139.178.68.195:46748.service: Deactivated successfully. 
Nov 5 16:04:44.245364 systemd-logind[1939]: Session 18 logged out. Waiting for processes to exit.
Nov 5 16:04:44.249741 systemd[1]: session-18.scope: Deactivated successfully.
Nov 5 16:04:44.254875 systemd-logind[1939]: Removed session 18.
Nov 5 16:04:44.273933 systemd[1]: Started sshd@19-172.31.17.172:22-139.178.68.195:40562.service - OpenSSH per-connection server daemon (139.178.68.195:40562).
Nov 5 16:04:44.495950 sshd[5693]: Accepted publickey for core from 139.178.68.195 port 40562 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:04:44.497823 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:04:44.507340 systemd-logind[1939]: New session 19 of user core.
Nov 5 16:04:44.513267 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 5 16:04:45.107162 kubelet[3273]: E1105 16:04:45.104917 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505"
Nov 5 16:04:45.120011 kubelet[3273]: E1105 16:04:45.119839 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:04:45.361414 sshd[5696]: Connection closed by 139.178.68.195 port 40562
Nov 5 16:04:45.368568 sshd-session[5693]: pam_unix(sshd:session): session closed for user core
Nov 5 16:04:45.381620 systemd[1]: sshd@19-172.31.17.172:22-139.178.68.195:40562.service: Deactivated successfully.
Nov 5 16:04:45.386603 systemd[1]: session-19.scope: Deactivated successfully.
Nov 5 16:04:45.388704 systemd-logind[1939]: Session 19 logged out. Waiting for processes to exit.
Nov 5 16:04:45.416145 systemd[1]: Started sshd@20-172.31.17.172:22-139.178.68.195:40572.service - OpenSSH per-connection server daemon (139.178.68.195:40572).
Nov 5 16:04:45.418075 systemd-logind[1939]: Removed session 19.
Nov 5 16:04:45.627377 sshd[5706]: Accepted publickey for core from 139.178.68.195 port 40572 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:04:45.629568 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:04:45.636469 systemd-logind[1939]: New session 20 of user core.
Nov 5 16:04:45.643344 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 5 16:04:45.942067 sshd[5709]: Connection closed by 139.178.68.195 port 40572
Nov 5 16:04:45.941351 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Nov 5 16:04:45.949202 systemd[1]: sshd@20-172.31.17.172:22-139.178.68.195:40572.service: Deactivated successfully.
Nov 5 16:04:45.950195 systemd-logind[1939]: Session 20 logged out. Waiting for processes to exit.
Nov 5 16:04:45.954809 systemd[1]: session-20.scope: Deactivated successfully.
Nov 5 16:04:45.959983 systemd-logind[1939]: Removed session 20.
Nov 5 16:04:47.097726 kubelet[3273]: E1105 16:04:47.097460 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef"
Nov 5 16:04:49.100118 kubelet[3273]: E1105 16:04:49.099921 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72"
Nov 5 16:04:50.988429 systemd[1]: Started sshd@21-172.31.17.172:22-139.178.68.195:40584.service - OpenSSH per-connection server daemon (139.178.68.195:40584).
Nov 5 16:04:51.198323 sshd[5727]: Accepted publickey for core from 139.178.68.195 port 40584 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:04:51.202476 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:04:51.217103 systemd-logind[1939]: New session 21 of user core.
Nov 5 16:04:51.219643 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 5 16:04:51.596684 sshd[5730]: Connection closed by 139.178.68.195 port 40584
Nov 5 16:04:51.598985 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Nov 5 16:04:51.608727 systemd[1]: sshd@21-172.31.17.172:22-139.178.68.195:40584.service: Deactivated successfully.
Nov 5 16:04:51.609058 systemd-logind[1939]: Session 21 logged out. Waiting for processes to exit.
Nov 5 16:04:51.612057 systemd[1]: session-21.scope: Deactivated successfully.
Nov 5 16:04:51.615472 systemd-logind[1939]: Removed session 21.
Nov 5 16:04:55.099558 kubelet[3273]: E1105 16:04:55.099282 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce"
Nov 5 16:04:56.633477 systemd[1]: Started sshd@22-172.31.17.172:22-139.178.68.195:35662.service - OpenSSH per-connection server daemon (139.178.68.195:35662).
Nov 5 16:04:56.880231 sshd[5742]: Accepted publickey for core from 139.178.68.195 port 35662 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:04:56.882299 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:04:56.890912 systemd-logind[1939]: New session 22 of user core.
Nov 5 16:04:56.898280 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 16:04:57.198122 sshd[5745]: Connection closed by 139.178.68.195 port 35662
Nov 5 16:04:57.198726 sshd-session[5742]: pam_unix(sshd:session): session closed for user core
Nov 5 16:04:57.207470 systemd-logind[1939]: Session 22 logged out. Waiting for processes to exit.
Nov 5 16:04:57.210691 systemd[1]: sshd@22-172.31.17.172:22-139.178.68.195:35662.service: Deactivated successfully.
Nov 5 16:04:57.215864 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 16:04:57.220533 systemd-logind[1939]: Removed session 22.
Nov 5 16:04:58.096864 kubelet[3273]: E1105 16:04:58.096760 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1"
Nov 5 16:04:58.099044 kubelet[3273]: E1105 16:04:58.098887 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef"
Nov 5 16:04:58.100053 kubelet[3273]: E1105 16:04:58.099837 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:05:00.096869 kubelet[3273]: E1105 16:05:00.096810 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505"
Nov 5 16:05:02.098282 kubelet[3273]: E1105 16:05:02.097541 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72"
Nov 5 16:05:02.259508 systemd[1]: Started sshd@23-172.31.17.172:22-139.178.68.195:35668.service - OpenSSH per-connection server daemon (139.178.68.195:35668).
Nov 5 16:05:02.496181 sshd[5757]: Accepted publickey for core from 139.178.68.195 port 35668 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:05:02.501659 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:05:02.523128 systemd-logind[1939]: New session 23 of user core.
Nov 5 16:05:02.536637 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 5 16:05:02.884591 sshd[5760]: Connection closed by 139.178.68.195 port 35668
Nov 5 16:05:02.887364 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Nov 5 16:05:02.894500 systemd[1]: sshd@23-172.31.17.172:22-139.178.68.195:35668.service: Deactivated successfully.
Nov 5 16:05:02.900007 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 16:05:02.904170 systemd-logind[1939]: Session 23 logged out. Waiting for processes to exit.
Nov 5 16:05:02.908338 systemd-logind[1939]: Removed session 23.
Nov 5 16:05:07.101725 kubelet[3273]: E1105 16:05:07.101664 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce"
Nov 5 16:05:07.927721 systemd[1]: Started sshd@24-172.31.17.172:22-139.178.68.195:32990.service - OpenSSH per-connection server daemon (139.178.68.195:32990).
Nov 5 16:05:08.125149 sshd[5780]: Accepted publickey for core from 139.178.68.195 port 32990 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:05:08.127776 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:05:08.137296 systemd-logind[1939]: New session 24 of user core.
Nov 5 16:05:08.144342 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 5 16:05:08.384050 sshd[5783]: Connection closed by 139.178.68.195 port 32990
Nov 5 16:05:08.384796 sshd-session[5780]: pam_unix(sshd:session): session closed for user core
Nov 5 16:05:08.391786 systemd-logind[1939]: Session 24 logged out. Waiting for processes to exit.
Nov 5 16:05:08.392564 systemd[1]: sshd@24-172.31.17.172:22-139.178.68.195:32990.service: Deactivated successfully.
Nov 5 16:05:08.398240 systemd[1]: session-24.scope: Deactivated successfully.
Nov 5 16:05:08.404746 systemd-logind[1939]: Removed session 24.
Nov 5 16:05:10.096181 kubelet[3273]: E1105 16:05:10.096041 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef"
Nov 5 16:05:10.116810 containerd[1979]: time="2025-11-05T16:05:10.097005910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 16:05:10.423067 containerd[1979]: time="2025-11-05T16:05:10.422997691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:10.425469 containerd[1979]: time="2025-11-05T16:05:10.425363833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 16:05:10.426086 containerd[1979]: time="2025-11-05T16:05:10.425420438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 16:05:10.426332 kubelet[3273]: E1105 16:05:10.426202 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 16:05:10.426504 kubelet[3273]: E1105 16:05:10.426452 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 16:05:10.427222 kubelet[3273]: E1105 16:05:10.427147 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7c9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xbcp7_calico-system(8831874b-2bb6-46c1-a079-c45a246f51e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:10.428638 kubelet[3273]: E1105 16:05:10.428553 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1"
Nov 5 16:05:11.097579 kubelet[3273]: E1105 16:05:11.097151 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505"
Nov 5 16:05:12.097268 kubelet[3273]: E1105 16:05:12.097140 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:05:12.213146 containerd[1979]: time="2025-11-05T16:05:12.213090305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" id:\"cf91d24d6244759b2ada5e3cd2205c25f6cd95c30a25b0d9ce2477425cb5ba7a\" pid:5808 exited_at:{seconds:1762358712 nanos:212418720}"
Nov 5 16:05:13.425422 systemd[1]: Started sshd@25-172.31.17.172:22-139.178.68.195:50352.service - OpenSSH per-connection server daemon (139.178.68.195:50352).
Nov 5 16:05:13.629042 sshd[5822]: Accepted publickey for core from 139.178.68.195 port 50352 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg
Nov 5 16:05:13.631405 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:05:13.640615 systemd-logind[1939]: New session 25 of user core.
Nov 5 16:05:13.649278 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 16:05:13.930954 sshd[5825]: Connection closed by 139.178.68.195 port 50352
Nov 5 16:05:13.932574 sshd-session[5822]: pam_unix(sshd:session): session closed for user core
Nov 5 16:05:13.938147 systemd[1]: sshd@25-172.31.17.172:22-139.178.68.195:50352.service: Deactivated successfully.
Nov 5 16:05:13.940934 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 16:05:13.945929 systemd-logind[1939]: Session 25 logged out. Waiting for processes to exit.
Nov 5 16:05:13.947974 systemd-logind[1939]: Removed session 25.
Nov 5 16:05:14.097850 containerd[1979]: time="2025-11-05T16:05:14.097805315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 16:05:14.394549 containerd[1979]: time="2025-11-05T16:05:14.394491422Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:14.397251 containerd[1979]: time="2025-11-05T16:05:14.397149201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 16:05:14.397251 containerd[1979]: time="2025-11-05T16:05:14.397264960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 16:05:14.397664 kubelet[3273]: E1105 16:05:14.397438 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 16:05:14.397664 kubelet[3273]: E1105 16:05:14.397505 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 16:05:14.398458 kubelet[3273]: E1105 16:05:14.397661 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8d8tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-5p6n9_calico-apiserver(97bb7728-1652-4f73-a3fd-5b00174bed72): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:14.399152 kubelet[3273]: E1105 16:05:14.399106 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72"
Nov 5 16:05:18.100042 containerd[1979]: time="2025-11-05T16:05:18.099123566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 5 16:05:18.437634 containerd[1979]: time="2025-11-05T16:05:18.437574662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:18.441213 containerd[1979]: time="2025-11-05T16:05:18.441060461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 5 16:05:18.441213 containerd[1979]: time="2025-11-05T16:05:18.441173688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 5 16:05:18.442260 kubelet[3273]: E1105 16:05:18.441605 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 16:05:18.442260 kubelet[3273]: E1105 16:05:18.441661 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 16:05:18.442260 kubelet[3273]: E1105 16:05:18.441788 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4f07a66d4bf44ddd85a38efffe012746,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:18.446708 containerd[1979]: time="2025-11-05T16:05:18.446668818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 5 16:05:18.750756 containerd[1979]: time="2025-11-05T16:05:18.750063360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:18.752674 containerd[1979]: time="2025-11-05T16:05:18.752169431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 5 16:05:18.755174 kubelet[3273]: E1105 16:05:18.755123 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 16:05:18.758299 kubelet[3273]: E1105 16:05:18.757535 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 16:05:18.758299 kubelet[3273]: E1105 16:05:18.757744 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-c574dd99b-btm8k_calico-system(5ed90129-d345-48e7-a043-180d8e15dcce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:18.760631 kubelet[3273]: E1105 16:05:18.760534 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:05:18.768220 containerd[1979]: time="2025-11-05T16:05:18.752465395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:05:18.972673 systemd[1]: Started sshd@26-172.31.17.172:22-139.178.68.195:50360.service - OpenSSH per-connection server daemon (139.178.68.195:50360). Nov 5 16:05:19.220576 sshd[5851]: Accepted publickey for core from 139.178.68.195 port 50360 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:05:19.225385 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:19.234359 systemd-logind[1939]: New session 26 of user core. 
Nov 5 16:05:19.241243 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 16:05:19.688614 sshd[5855]: Connection closed by 139.178.68.195 port 50360 Nov 5 16:05:19.690811 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:19.723461 systemd[1]: sshd@26-172.31.17.172:22-139.178.68.195:50360.service: Deactivated successfully. Nov 5 16:05:19.723916 systemd-logind[1939]: Session 26 logged out. Waiting for processes to exit. Nov 5 16:05:19.726685 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 16:05:19.730994 systemd-logind[1939]: Removed session 26. Nov 5 16:05:22.097180 kubelet[3273]: E1105 16:05:22.097068 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1" Nov 5 16:05:23.100626 containerd[1979]: time="2025-11-05T16:05:23.099873500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:05:23.436187 containerd[1979]: time="2025-11-05T16:05:23.436140262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:23.438649 containerd[1979]: time="2025-11-05T16:05:23.438522774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:05:23.438649 
containerd[1979]: time="2025-11-05T16:05:23.438616410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:05:23.439712 kubelet[3273]: E1105 16:05:23.438825 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:23.439712 kubelet[3273]: E1105 16:05:23.438880 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:05:23.439712 kubelet[3273]: E1105 16:05:23.439055 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk4bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74b589d999-5tfgh_calico-system(9aac16aa-0990-4e14-a1db-e5abd9a92505): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:23.440470 kubelet[3273]: E1105 16:05:23.440420 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505" Nov 5 16:05:24.099013 containerd[1979]: time="2025-11-05T16:05:24.098710488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:05:24.430670 containerd[1979]: 
time="2025-11-05T16:05:24.430612345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:24.433093 containerd[1979]: time="2025-11-05T16:05:24.432882651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:05:24.433093 containerd[1979]: time="2025-11-05T16:05:24.433056069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:05:24.433706 kubelet[3273]: E1105 16:05:24.433305 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:24.433813 kubelet[3273]: E1105 16:05:24.433734 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:05:24.435575 kubelet[3273]: E1105 16:05:24.433896 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pg2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6df446974d-wz89l_calico-apiserver(7a4ffcd2-c3d0-43ff-8d92-50435ddcecef): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:24.436811 kubelet[3273]: E1105 16:05:24.436746 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef" Nov 5 16:05:25.111789 containerd[1979]: time="2025-11-05T16:05:25.111750023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:05:25.756703 containerd[1979]: time="2025-11-05T16:05:25.756652394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:25.758962 containerd[1979]: time="2025-11-05T16:05:25.758825264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:05:25.758962 containerd[1979]: time="2025-11-05T16:05:25.758874547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:05:25.759254 kubelet[3273]: E1105 16:05:25.759199 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:25.760099 kubelet[3273]: E1105 16:05:25.759276 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:25.760099 kubelet[3273]: E1105 16:05:25.759472 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:25.763220 containerd[1979]: time="2025-11-05T16:05:25.763175386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:05:26.095915 kubelet[3273]: E1105 16:05:26.095450 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72" Nov 5 16:05:26.103602 containerd[1979]: time="2025-11-05T16:05:26.103548580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:26.106238 containerd[1979]: time="2025-11-05T16:05:26.105876201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:05:26.106404 containerd[1979]: time="2025-11-05T16:05:26.105876272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:05:26.106579 kubelet[3273]: E1105 16:05:26.106523 3273 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:26.106676 kubelet[3273]: E1105 16:05:26.106588 3273 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:26.106823 kubelet[3273]: E1105 16:05:26.106736 3273 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z48dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dsvvp_calico-system(6e11cbb7-6c81-460e-9d02-0e852cdd8f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:26.108005 kubelet[3273]: E1105 16:05:26.107946 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c" Nov 5 16:05:28.059976 update_engine[1940]: I20251105 16:05:28.059901 1940 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 5 16:05:28.059976 update_engine[1940]: I20251105 16:05:28.059969 1940 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 5 16:05:28.062664 update_engine[1940]: I20251105 16:05:28.062602 1940 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 5 16:05:28.063292 update_engine[1940]: I20251105 16:05:28.063252 1940 omaha_request_params.cc:62] Current group set to alpha Nov 5 16:05:28.063626 update_engine[1940]: I20251105 16:05:28.063409 1940 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 5 16:05:28.063626 update_engine[1940]: I20251105 16:05:28.063493 1940 update_attempter.cc:643] Scheduling an action processor start. 
Nov 5 16:05:28.063626 update_engine[1940]: I20251105 16:05:28.063525 1940 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 5 16:05:28.063626 update_engine[1940]: I20251105 16:05:28.063582 1940 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 5 16:05:28.063902 update_engine[1940]: I20251105 16:05:28.063850 1940 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 5 16:05:28.063902 update_engine[1940]: I20251105 16:05:28.063870 1940 omaha_request_action.cc:272] Request: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: Nov 5 16:05:28.063902 update_engine[1940]: I20251105 16:05:28.063880 1940 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 16:05:28.097505 update_engine[1940]: I20251105 16:05:28.097445 1940 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 16:05:28.098209 update_engine[1940]: I20251105 16:05:28.098145 1940 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 5 16:05:28.103823 locksmithd[2030]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 5 16:05:28.121577 update_engine[1940]: E20251105 16:05:28.121433 1940 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 5 16:05:28.121577 update_engine[1940]: I20251105 16:05:28.121536 1940 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 5 16:05:30.097043 kubelet[3273]: E1105 16:05:30.096967 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce" Nov 5 16:05:34.753187 systemd[1]: cri-containerd-32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493.scope: Deactivated successfully. Nov 5 16:05:34.754853 systemd[1]: cri-containerd-32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493.scope: Consumed 4.047s CPU time, 90.6M memory peak, 67.6M read from disk. 
Nov 5 16:05:34.791588 containerd[1979]: time="2025-11-05T16:05:34.791544299Z" level=info msg="received exit event container_id:\"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\" id:\"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\" pid:3115 exit_status:1 exited_at:{seconds:1762358734 nanos:791176710}" Nov 5 16:05:34.792083 containerd[1979]: time="2025-11-05T16:05:34.791854782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\" id:\"32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493\" pid:3115 exit_status:1 exited_at:{seconds:1762358734 nanos:791176710}" Nov 5 16:05:34.967496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493-rootfs.mount: Deactivated successfully. Nov 5 16:05:34.978156 systemd[1]: cri-containerd-573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153.scope: Deactivated successfully. Nov 5 16:05:34.978549 systemd[1]: cri-containerd-573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153.scope: Consumed 17.472s CPU time, 110.9M memory peak, 49.3M read from disk. 
Nov 5 16:05:34.980728 containerd[1979]: time="2025-11-05T16:05:34.980688962Z" level=info msg="received exit event container_id:\"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\" id:\"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\" pid:3649 exit_status:1 exited_at:{seconds:1762358734 nanos:977896226}"
Nov 5 16:05:34.982253 containerd[1979]: time="2025-11-05T16:05:34.981898479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\" id:\"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\" pid:3649 exit_status:1 exited_at:{seconds:1762358734 nanos:977896226}"
Nov 5 16:05:35.050800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153-rootfs.mount: Deactivated successfully.
Nov 5 16:05:35.198618 kubelet[3273]: I1105 16:05:35.198549 3273 scope.go:117] "RemoveContainer" containerID="573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153"
Nov 5 16:05:35.263658 containerd[1979]: time="2025-11-05T16:05:35.263455693Z" level=info msg="CreateContainer within sandbox \"b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 5 16:05:35.355216 containerd[1979]: time="2025-11-05T16:05:35.353176640Z" level=info msg="Container 1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:05:35.437452 containerd[1979]: time="2025-11-05T16:05:35.437399746Z" level=info msg="CreateContainer within sandbox \"b7df47180fb81d2a1f30d61d25103584d2f014db46990f4d013f38c77a288d97\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\""
Nov 5 16:05:35.439208 containerd[1979]: time="2025-11-05T16:05:35.438178249Z" level=info msg="StartContainer for \"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\""
Nov 5 16:05:35.439208 containerd[1979]: time="2025-11-05T16:05:35.439101039Z" level=info msg="connecting to shim 1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b" address="unix:///run/containerd/s/adf57407fa3aaa60ec10fc839f04ff0a95c03275f30cadb0da32b913528910be" protocol=ttrpc version=3
Nov 5 16:05:35.467419 systemd[1]: Started cri-containerd-1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b.scope - libcontainer container 1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b.
Nov 5 16:05:35.558858 containerd[1979]: time="2025-11-05T16:05:35.558813839Z" level=info msg="StartContainer for \"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\" returns successfully"
Nov 5 16:05:36.193277 kubelet[3273]: I1105 16:05:36.193240 3273 scope.go:117] "RemoveContainer" containerID="32064864a6df86725c1619a9e55ee870728fe9b7983df8882ebf6703e2278493"
Nov 5 16:05:36.200361 containerd[1979]: time="2025-11-05T16:05:36.199347193Z" level=info msg="CreateContainer within sandbox \"ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 5 16:05:36.226655 containerd[1979]: time="2025-11-05T16:05:36.226595815Z" level=info msg="Container fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:05:36.235985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449419393.mount: Deactivated successfully.
Nov 5 16:05:36.254285 containerd[1979]: time="2025-11-05T16:05:36.254237628Z" level=info msg="CreateContainer within sandbox \"ef0c0e053d8789d2cec19f8040844870d5a09c9400041602e5b7f3d0ac2323a8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae\""
Nov 5 16:05:36.256447 containerd[1979]: time="2025-11-05T16:05:36.256226687Z" level=info msg="StartContainer for \"fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae\""
Nov 5 16:05:36.260300 containerd[1979]: time="2025-11-05T16:05:36.260239254Z" level=info msg="connecting to shim fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae" address="unix:///run/containerd/s/12ef528e9744596dcae4f9312f15e8ce38c9ca62a9292953c87a5cf605586f0f" protocol=ttrpc version=3
Nov 5 16:05:36.302282 systemd[1]: Started cri-containerd-fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae.scope - libcontainer container fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae.
Nov 5 16:05:36.395096 containerd[1979]: time="2025-11-05T16:05:36.395040931Z" level=info msg="StartContainer for \"fabf8f6e0dc913b2fe7f61f9065c5476254cb36e959ce97b14e1079ea064a8ae\" returns successfully"
Nov 5 16:05:37.098217 kubelet[3273]: E1105 16:05:37.097588 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72"
Nov 5 16:05:37.099214 kubelet[3273]: E1105 16:05:37.098730 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1"
Nov 5 16:05:37.100211 kubelet[3273]: E1105 16:05:37.100158 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505"
Nov 5 16:05:37.975828 update_engine[1940]: I20251105 16:05:37.975131 1940 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 16:05:37.975828 update_engine[1940]: I20251105 16:05:37.975245 1940 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 16:05:37.975828 update_engine[1940]: I20251105 16:05:37.975775 1940 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 16:05:37.977770 update_engine[1940]: E20251105 16:05:37.977726 1940 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 16:05:37.977981 update_engine[1940]: I20251105 16:05:37.977960 1940 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 5 16:05:38.096182 kubelet[3273]: E1105 16:05:38.096096 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef"
Nov 5 16:05:38.358578 kubelet[3273]: E1105 16:05:38.358431 3273 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 5 16:05:39.459244 systemd[1]: cri-containerd-4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05.scope: Deactivated successfully.
Nov 5 16:05:39.460626 systemd[1]: cri-containerd-4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05.scope: Consumed 2.805s CPU time, 43.4M memory peak, 34.8M read from disk.
Nov 5 16:05:39.463725 containerd[1979]: time="2025-11-05T16:05:39.463612308Z" level=info msg="received exit event container_id:\"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\" id:\"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\" pid:3106 exit_status:1 exited_at:{seconds:1762358739 nanos:463251763}"
Nov 5 16:05:39.464320 containerd[1979]: time="2025-11-05T16:05:39.464123504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\" id:\"4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05\" pid:3106 exit_status:1 exited_at:{seconds:1762358739 nanos:463251763}"
Nov 5 16:05:39.492382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05-rootfs.mount: Deactivated successfully.
Nov 5 16:05:40.218672 kubelet[3273]: I1105 16:05:40.218637 3273 scope.go:117] "RemoveContainer" containerID="4bc0f58cb05d04b0c0b374e04b3e06da943c4e1f7035e5a68b8923985d517d05"
Nov 5 16:05:40.221227 containerd[1979]: time="2025-11-05T16:05:40.221195654Z" level=info msg="CreateContainer within sandbox \"a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 5 16:05:40.256052 containerd[1979]: time="2025-11-05T16:05:40.255886451Z" level=info msg="Container ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:05:40.264659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232159818.mount: Deactivated successfully.
Nov 5 16:05:40.272859 containerd[1979]: time="2025-11-05T16:05:40.272801116Z" level=info msg="CreateContainer within sandbox \"a8910b93862583903af20aefe5347d34e50d3d751af5b3fb05ed657155899b9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd\""
Nov 5 16:05:40.273523 containerd[1979]: time="2025-11-05T16:05:40.273492867Z" level=info msg="StartContainer for \"ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd\""
Nov 5 16:05:40.274882 containerd[1979]: time="2025-11-05T16:05:40.274842165Z" level=info msg="connecting to shim ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd" address="unix:///run/containerd/s/80308aa1adb71c4326be0a4c15ec2c02c9cf14c5423d27bdcaae06382b2328e5" protocol=ttrpc version=3
Nov 5 16:05:40.302250 systemd[1]: Started cri-containerd-ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd.scope - libcontainer container ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd.
Nov 5 16:05:40.382778 containerd[1979]: time="2025-11-05T16:05:40.382481375Z" level=info msg="StartContainer for \"ba5356f919048f55a1f2ecfb7eb308fdcb3bfc209a870ddced17ad5111f31ccd\" returns successfully"
Nov 5 16:05:41.096881 kubelet[3273]: E1105 16:05:41.096780 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"
Nov 5 16:05:42.087216 containerd[1979]: time="2025-11-05T16:05:42.087157972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e97dcc3fd4ab92721f81610c69d03ab92a58fe651e3a44a8865f3f1df18ca6c\" id:\"c973b287e183a6cb750a728f937af9bcb64892824e2cfecf760af4ab2a42c7e6\" pid:6022 exited_at:{seconds:1762358742 nanos:86422071}"
Nov 5 16:05:44.097219 kubelet[3273]: E1105 16:05:44.097149 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c574dd99b-btm8k" podUID="5ed90129-d345-48e7-a043-180d8e15dcce"
Nov 5 16:05:47.977690 update_engine[1940]: I20251105 16:05:47.977604 1940 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 16:05:47.978147 update_engine[1940]: I20251105 16:05:47.977712 1940 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 16:05:47.978147 update_engine[1940]: I20251105 16:05:47.978101 1940 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 16:05:47.979768 update_engine[1940]: E20251105 16:05:47.979707 1940 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 16:05:47.979879 update_engine[1940]: I20251105 16:05:47.979799 1940 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Nov 5 16:05:48.199433 systemd[1]: cri-containerd-1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b.scope: Deactivated successfully.
Nov 5 16:05:48.199815 systemd[1]: cri-containerd-1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b.scope: Consumed 393ms CPU time, 66M memory peak, 33.7M read from disk.
Nov 5 16:05:48.205064 containerd[1979]: time="2025-11-05T16:05:48.205000583Z" level=info msg="received exit event container_id:\"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\" id:\"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\" pid:5917 exit_status:1 exited_at:{seconds:1762358748 nanos:204744958}"
Nov 5 16:05:48.205441 containerd[1979]: time="2025-11-05T16:05:48.205191032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\" id:\"1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b\" pid:5917 exit_status:1 exited_at:{seconds:1762358748 nanos:204744958}"
Nov 5 16:05:48.230479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b-rootfs.mount: Deactivated successfully.
Nov 5 16:05:48.359492 kubelet[3273]: E1105 16:05:48.359438 3273 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.172:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-172?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 5 16:05:49.096679 kubelet[3273]: E1105 16:05:49.096357 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74b589d999-5tfgh" podUID="9aac16aa-0990-4e14-a1db-e5abd9a92505"
Nov 5 16:05:49.261653 kubelet[3273]: I1105 16:05:49.261597 3273 scope.go:117] "RemoveContainer" containerID="573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153"
Nov 5 16:05:49.261912 kubelet[3273]: I1105 16:05:49.261892 3273 scope.go:117] "RemoveContainer" containerID="1f59bb6bf999d3e2a7cf210d50e4b6bafff305d86bc7f95731f0ff102c38d16b"
Nov 5 16:05:49.262246 kubelet[3273]: E1105 16:05:49.262208 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-wn8cn_tigera-operator(a991831b-f923-448f-b89c-0cac151ec620)\"" pod="tigera-operator/tigera-operator-7dcd859c48-wn8cn" podUID="a991831b-f923-448f-b89c-0cac151ec620"
Nov 5 16:05:49.386521 containerd[1979]: time="2025-11-05T16:05:49.386468891Z" level=info msg="RemoveContainer for \"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\""
Nov 5 16:05:49.481174 containerd[1979]: time="2025-11-05T16:05:49.481099680Z" level=info msg="RemoveContainer for \"573f575f1edd1148c3e628f48e2265126b1c817a363f2768150eaa3aa3bfe153\" returns successfully"
Nov 5 16:05:51.097092 kubelet[3273]: E1105 16:05:51.097047 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xbcp7" podUID="8831874b-2bb6-46c1-a079-c45a246f51e1"
Nov 5 16:05:51.097602 kubelet[3273]: E1105 16:05:51.097148 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-5p6n9" podUID="97bb7728-1652-4f73-a3fd-5b00174bed72"
Nov 5 16:05:51.097602 kubelet[3273]: E1105 16:05:51.097563 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6df446974d-wz89l" podUID="7a4ffcd2-c3d0-43ff-8d92-50435ddcecef"
Nov 5 16:05:52.095694 kubelet[3273]: E1105 16:05:52.095642 3273 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dsvvp" podUID="6e11cbb7-6c81-460e-9d02-0e852cdd8f6c"