Nov 8 00:24:55.934420 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:24:55.934455 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:24:55.934474 kernel: BIOS-provided physical RAM map:
Nov 8 00:24:55.934485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:24:55.934496 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 8 00:24:55.934507 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Nov 8 00:24:55.934520 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Nov 8 00:24:55.934532 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 8 00:24:55.934544 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 8 00:24:55.934558 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 8 00:24:55.934570 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 8 00:24:55.934581 kernel: NX (Execute Disable) protection: active
Nov 8 00:24:55.934593 kernel: APIC: Static calls initialized
Nov 8 00:24:55.934604 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:24:55.934619 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Nov 8 00:24:55.934635 kernel: SMBIOS 2.7 present.
Nov 8 00:24:55.934648 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 8 00:24:55.934660 kernel: Hypervisor detected: KVM
Nov 8 00:24:55.934673 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:24:55.934685 kernel: kvm-clock: using sched offset of 3828532161 cycles
Nov 8 00:24:55.934699 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:24:55.934712 kernel: tsc: Detected 2499.996 MHz processor
Nov 8 00:24:55.934725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:24:55.934739 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:24:55.934752 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 8 00:24:55.934768 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:24:55.934781 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:24:55.934794 kernel: Using GB pages for direct mapping
Nov 8 00:24:55.934806 kernel: Secure boot disabled
Nov 8 00:24:55.934819 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:24:55.934833 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 8 00:24:55.934846 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 8 00:24:55.934859 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 8 00:24:55.934872 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 8 00:24:55.934887 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 8 00:24:55.934901 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 8 00:24:55.934914 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 8 00:24:55.934926 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 8 00:24:55.934939 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 8 00:24:55.934953 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 8 00:24:55.934972 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 8 00:24:55.934989 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 8 00:24:55.935003 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 8 00:24:55.935017 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 8 00:24:55.935030 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 8 00:24:55.935044 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 8 00:24:55.935058 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 8 00:24:55.935074 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 8 00:24:55.935087 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 8 00:24:55.935101 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 8 00:24:55.935127 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 8 00:24:55.935142 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 8 00:24:55.935155 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 8 00:24:55.935169 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 8 00:24:55.935182 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:24:55.935196 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:24:55.935209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 8 00:24:55.935226 kernel: NUMA: Initialized distance table, cnt=1
Nov 8 00:24:55.935240 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Nov 8 00:24:55.935253 kernel: Zone ranges:
Nov 8 00:24:55.935268 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:24:55.935281 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 8 00:24:55.935295 kernel: Normal empty
Nov 8 00:24:55.935309 kernel: Movable zone start for each node
Nov 8 00:24:55.935322 kernel: Early memory node ranges
Nov 8 00:24:55.935336 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:24:55.935352 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 8 00:24:55.935365 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 8 00:24:55.935379 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 8 00:24:55.935394 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:24:55.935407 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:24:55.935422 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 8 00:24:55.935436 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 8 00:24:55.935450 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 8 00:24:55.935463 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:24:55.935481 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 8 00:24:55.935495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:24:55.935509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:24:55.935522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:24:55.935536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:24:55.935550 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:24:55.935564 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:24:55.935594 kernel: TSC deadline timer available
Nov 8 00:24:55.935607 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:24:55.935626 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:24:55.935641 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 8 00:24:55.935656 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:24:55.935671 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:24:55.935686 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:24:55.935702 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:24:55.935717 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:24:55.935731 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:24:55.935746 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:24:55.935761 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:24:55.935781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:24:55.935797 kernel: random: crng init done
Nov 8 00:24:55.935812 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:24:55.935827 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:24:55.935842 kernel: Fallback order for Node 0: 0
Nov 8 00:24:55.935857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Nov 8 00:24:55.935872 kernel: Policy zone: DMA32
Nov 8 00:24:55.935887 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:24:55.935906 kernel: Memory: 1874600K/2037804K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 162944K reserved, 0K cma-reserved)
Nov 8 00:24:55.935922 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:24:55.935936 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:24:55.935952 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:24:55.935967 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:24:55.935982 kernel: Dynamic Preempt: voluntary
Nov 8 00:24:55.935997 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:24:55.936013 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:24:55.936028 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:24:55.936047 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:24:55.936062 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:24:55.936077 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:24:55.936092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:24:55.936107 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:24:55.936134 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:24:55.936149 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:24:55.936179 kernel: Console: colour dummy device 80x25
Nov 8 00:24:55.936195 kernel: printk: console [tty0] enabled
Nov 8 00:24:55.936211 kernel: printk: console [ttyS0] enabled
Nov 8 00:24:55.936227 kernel: ACPI: Core revision 20230628
Nov 8 00:24:55.936246 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 8 00:24:55.936263 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:24:55.936278 kernel: x2apic enabled
Nov 8 00:24:55.936294 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:24:55.936311 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 8 00:24:55.936330 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Nov 8 00:24:55.936345 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:24:55.936360 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:24:55.936377 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:24:55.936393 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:24:55.936409 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:24:55.936425 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:24:55.936442 kernel: RETBleed: Vulnerable
Nov 8 00:24:55.936456 kernel: Speculative Store Bypass: Vulnerable
Nov 8 00:24:55.936470 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:24:55.936490 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:24:55.936504 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 8 00:24:55.936518 kernel: active return thunk: its_return_thunk
Nov 8 00:24:55.936532 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:24:55.936547 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:24:55.936562 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:24:55.936577 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:24:55.936592 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 8 00:24:55.936607 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 8 00:24:55.936622 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 8 00:24:55.936637 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 8 00:24:55.936656 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 8 00:24:55.936672 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 8 00:24:55.936688 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:24:55.936705 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 8 00:24:55.936720 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 8 00:24:55.936735 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 8 00:24:55.936751 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 8 00:24:55.936767 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 8 00:24:55.936783 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 8 00:24:55.936807 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 8 00:24:55.936823 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:24:55.936839 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:24:55.936859 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:24:55.936875 kernel: landlock: Up and running.
Nov 8 00:24:55.936891 kernel: SELinux: Initializing.
Nov 8 00:24:55.936907 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:24:55.936924 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:24:55.936941 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 8 00:24:55.936958 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:24:55.936975 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:24:55.936992 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:24:55.937009 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 8 00:24:55.937029 kernel: signal: max sigframe size: 3632
Nov 8 00:24:55.937046 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:24:55.937063 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:24:55.937080 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:24:55.937097 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:24:55.937114 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:24:55.937164 kernel: .... node #0, CPUs: #1
Nov 8 00:24:55.937181 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 8 00:24:55.937198 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:24:55.937218 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:24:55.937234 kernel: smpboot: Max logical packages: 1
Nov 8 00:24:55.937250 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Nov 8 00:24:55.937266 kernel: devtmpfs: initialized
Nov 8 00:24:55.937282 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:24:55.937298 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 8 00:24:55.937314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:24:55.937330 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:24:55.937348 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:24:55.937364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:24:55.937380 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:24:55.937395 kernel: audit: type=2000 audit(1762561494.739:1): state=initialized audit_enabled=0 res=1
Nov 8 00:24:55.937410 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:24:55.937426 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:24:55.937442 kernel: cpuidle: using governor menu
Nov 8 00:24:55.937458 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:24:55.937473 kernel: dca service started, version 1.12.1
Nov 8 00:24:55.937491 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:24:55.937507 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:24:55.937523 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:24:55.937538 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:24:55.937554 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:24:55.937570 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:24:55.937586 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:24:55.937601 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:24:55.937617 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:24:55.937636 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 8 00:24:55.937652 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:24:55.937667 kernel: ACPI: Interpreter enabled
Nov 8 00:24:55.937683 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:24:55.937698 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:24:55.937714 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:24:55.937729 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:24:55.937745 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 8 00:24:55.937761 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:24:55.937988 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:24:55.938149 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 8 00:24:55.938280 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 8 00:24:55.938299 kernel: acpiphp: Slot [3] registered
Nov 8 00:24:55.938315 kernel: acpiphp: Slot [4] registered
Nov 8 00:24:55.938330 kernel: acpiphp: Slot [5] registered
Nov 8 00:24:55.938346 kernel: acpiphp: Slot [6] registered
Nov 8 00:24:55.938362 kernel: acpiphp: Slot [7] registered
Nov 8 00:24:55.938381 kernel: acpiphp: Slot [8] registered
Nov 8 00:24:55.938396 kernel: acpiphp: Slot [9] registered
Nov 8 00:24:55.938412 kernel: acpiphp: Slot [10] registered
Nov 8 00:24:55.938427 kernel: acpiphp: Slot [11] registered
Nov 8 00:24:55.938443 kernel: acpiphp: Slot [12] registered
Nov 8 00:24:55.938458 kernel: acpiphp: Slot [13] registered
Nov 8 00:24:55.938474 kernel: acpiphp: Slot [14] registered
Nov 8 00:24:55.938489 kernel: acpiphp: Slot [15] registered
Nov 8 00:24:55.938504 kernel: acpiphp: Slot [16] registered
Nov 8 00:24:55.938523 kernel: acpiphp: Slot [17] registered
Nov 8 00:24:55.938539 kernel: acpiphp: Slot [18] registered
Nov 8 00:24:55.938555 kernel: acpiphp: Slot [19] registered
Nov 8 00:24:55.938570 kernel: acpiphp: Slot [20] registered
Nov 8 00:24:55.938586 kernel: acpiphp: Slot [21] registered
Nov 8 00:24:55.938602 kernel: acpiphp: Slot [22] registered
Nov 8 00:24:55.938618 kernel: acpiphp: Slot [23] registered
Nov 8 00:24:55.938639 kernel: acpiphp: Slot [24] registered
Nov 8 00:24:55.938655 kernel: acpiphp: Slot [25] registered
Nov 8 00:24:55.938684 kernel: acpiphp: Slot [26] registered
Nov 8 00:24:55.938711 kernel: acpiphp: Slot [27] registered
Nov 8 00:24:55.938728 kernel: acpiphp: Slot [28] registered
Nov 8 00:24:55.938745 kernel: acpiphp: Slot [29] registered
Nov 8 00:24:55.938762 kernel: acpiphp: Slot [30] registered
Nov 8 00:24:55.938778 kernel: acpiphp: Slot [31] registered
Nov 8 00:24:55.938795 kernel: PCI host bridge to bus 0000:00
Nov 8 00:24:55.938938 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:24:55.939064 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:24:55.939776 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:24:55.939912 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 8 00:24:55.940035 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 8 00:24:55.940185 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:24:55.940345 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 8 00:24:55.940493 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 8 00:24:55.940646 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 8 00:24:55.940782 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 8 00:24:55.940942 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 8 00:24:55.941083 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 8 00:24:55.941270 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 8 00:24:55.941414 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 8 00:24:55.941584 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 8 00:24:55.941730 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 8 00:24:55.941878 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 8 00:24:55.942019 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Nov 8 00:24:55.942226 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 8 00:24:55.942378 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Nov 8 00:24:55.942519 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:24:55.942671 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 8 00:24:55.942818 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Nov 8 00:24:55.942963 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 8 00:24:55.943096 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Nov 8 00:24:55.943128 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:24:55.943145 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:24:55.943160 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:24:55.943175 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:24:55.943191 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 8 00:24:55.943211 kernel: iommu: Default domain type: Translated
Nov 8 00:24:55.943227 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:24:55.943242 kernel: efivars: Registered efivars operations
Nov 8 00:24:55.943257 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:24:55.943272 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:24:55.943287 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 8 00:24:55.943302 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 8 00:24:55.943435 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 8 00:24:55.943573 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 8 00:24:55.943707 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:24:55.943726 kernel: vgaarb: loaded
Nov 8 00:24:55.943741 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 8 00:24:55.943757 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 8 00:24:55.943772 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:24:55.943787 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:24:55.943802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:24:55.943817 kernel: pnp: PnP ACPI init
Nov 8 00:24:55.943836 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:24:55.943851 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:24:55.943866 kernel: NET: Registered PF_INET protocol family
Nov 8 00:24:55.943882 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:24:55.943898 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:24:55.943912 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:24:55.943927 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:24:55.943943 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:24:55.943958 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:24:55.943976 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:24:55.943991 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:24:55.944006 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:24:55.944021 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:24:55.944160 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:24:55.944282 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:24:55.944401 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:24:55.944519 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 8 00:24:55.944662 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 8 00:24:55.944818 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 8 00:24:55.944842 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:24:55.944857 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:24:55.944871 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 8 00:24:55.944886 kernel: clocksource: Switched to clocksource tsc
Nov 8 00:24:55.944901 kernel: Initialise system trusted keyrings
Nov 8 00:24:55.944916 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:24:55.944933 kernel: Key type asymmetric registered
Nov 8 00:24:55.944954 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:24:55.944971 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:24:55.944987 kernel: io scheduler mq-deadline registered
Nov 8 00:24:55.945002 kernel: io scheduler kyber registered
Nov 8 00:24:55.945016 kernel: io scheduler bfq registered
Nov 8 00:24:55.945030 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:24:55.945045 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:24:55.945059 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:24:55.945077 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:24:55.945096 kernel: i8042: Warning: Keylock active
Nov 8 00:24:55.945112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:24:55.945154 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:24:55.945358 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 8 00:24:55.945522 kernel: rtc_cmos 00:00: registered as rtc0
Nov 8 00:24:55.945653 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:24:55 UTC (1762561495)
Nov 8 00:24:55.945786 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 8 00:24:55.945812 kernel: intel_pstate: CPU model not supported
Nov 8 00:24:55.945830 kernel: efifb: probing for efifb
Nov 8 00:24:55.945848 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Nov 8 00:24:55.945865 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 8 00:24:55.945883 kernel: efifb: scrolling: redraw
Nov 8 00:24:55.945900 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:24:55.945918 kernel: Console: switching to colour frame buffer device 100x37
Nov 8 00:24:55.945935 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:24:55.945953 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:24:55.945970 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:24:55.945991 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:24:55.946008 kernel: Segment Routing with IPv6
Nov 8 00:24:55.946025 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:24:55.946042 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:24:55.946059 kernel: Key type dns_resolver registered
Nov 8 00:24:55.946077 kernel: IPI shorthand broadcast: enabled
Nov 8 00:24:55.946173 kernel: sched_clock: Marking stable (463001905, 126927392)->(674599467, -84670170)
Nov 8 00:24:55.946194 kernel: registered taskstats version 1
Nov 8 00:24:55.946213 kernel: Loading compiled-in X.509 certificates
Nov 8 00:24:55.946234 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:24:55.946252 kernel: Key type .fscrypt registered
Nov 8 00:24:55.946269 kernel: Key type fscrypt-provisioning registered
Nov 8 00:24:55.946287 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:24:55.946305 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:24:55.946323 kernel: ima: No architecture policies found
Nov 8 00:24:55.946341 kernel: clk: Disabling unused clocks
Nov 8 00:24:55.946359 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:24:55.946375 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:24:55.946394 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:24:55.946411 kernel: Run /init as init process
Nov 8 00:24:55.946427 kernel: with arguments:
Nov 8 00:24:55.946444 kernel: /init
Nov 8 00:24:55.946461 kernel: with environment:
Nov 8 00:24:55.946478 kernel: HOME=/
Nov 8 00:24:55.946495 kernel: TERM=linux
Nov 8 00:24:55.946515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:24:55.946539 systemd[1]: Detected virtualization amazon.
Nov 8 00:24:55.946557 systemd[1]: Detected architecture x86-64.
Nov 8 00:24:55.946575 systemd[1]: Running in initrd.
Nov 8 00:24:55.946592 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:24:55.946609 systemd[1]: Hostname set to .
Nov 8 00:24:55.946628 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:24:55.946645 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:24:55.946663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:24:55.946684 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:24:55.946703 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:24:55.946721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:24:55.946740 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:24:55.946761 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:24:55.946784 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:24:55.946802 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:24:55.946820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:24:55.946839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:24:55.946856 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:24:55.946875 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:24:55.946893 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:24:55.946914 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:24:55.946932 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:24:55.946950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:24:55.946968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:24:55.946986 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:24:55.947004 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:24:55.947022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:24:55.947040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:24:55.947061 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:24:55.947079 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:24:55.947097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:24:55.947126 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:24:55.947151 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:24:55.947167 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:24:55.947185 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:24:55.947203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:24:55.947250 systemd-journald[178]: Collecting audit messages is disabled.
Nov 8 00:24:55.947292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:24:55.947327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:24:55.947347 systemd-journald[178]: Journal started
Nov 8 00:24:55.947384 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2ffc0e3ae90fdfcdd196095f4ca18d) is 4.7M, max 38.2M, 33.4M free.
Nov 8 00:24:55.945566 systemd-modules-load[179]: Inserted module 'overlay'
Nov 8 00:24:55.955282 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:24:55.958245 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:24:55.975428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:24:55.984382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:24:55.988224 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:24:55.993603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:24:55.996259 kernel: Bridge firewalling registered
Nov 8 00:24:55.995482 systemd-modules-load[179]: Inserted module 'br_netfilter'
Nov 8 00:24:56.003534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:24:56.005606 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:24:56.004388 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:24:56.013314 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:24:56.015246 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:24:56.017282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:24:56.017881 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:24:56.034022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:24:56.036088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:24:56.037607 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:24:56.040391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:24:56.042261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:24:56.056101 dracut-cmdline[211]: dracut-dracut-053
Nov 8 00:24:56.061181 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:24:56.092973 systemd-resolved[213]: Positive Trust Anchors:
Nov 8 00:24:56.093998 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:24:56.094065 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:24:56.101695 systemd-resolved[213]: Defaulting to hostname 'linux'.
Nov 8 00:24:56.104278 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:24:56.105330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:24:56.151159 kernel: SCSI subsystem initialized
Nov 8 00:24:56.161146 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:24:56.173154 kernel: iscsi: registered transport (tcp)
Nov 8 00:24:56.194348 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:24:56.194428 kernel: QLogic iSCSI HBA Driver
Nov 8 00:24:56.233706 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:24:56.238332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:24:56.265160 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:24:56.265237 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:24:56.267890 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:24:56.309164 kernel: raid6: avx512x4 gen() 18205 MB/s
Nov 8 00:24:56.327145 kernel: raid6: avx512x2 gen() 18131 MB/s
Nov 8 00:24:56.345150 kernel: raid6: avx512x1 gen() 18096 MB/s
Nov 8 00:24:56.363146 kernel: raid6: avx2x4 gen() 17958 MB/s
Nov 8 00:24:56.381146 kernel: raid6: avx2x2 gen() 17995 MB/s
Nov 8 00:24:56.399328 kernel: raid6: avx2x1 gen() 13735 MB/s
Nov 8 00:24:56.399374 kernel: raid6: using algorithm avx512x4 gen() 18205 MB/s
Nov 8 00:24:56.418342 kernel: raid6: .... xor() 7455 MB/s, rmw enabled
Nov 8 00:24:56.418386 kernel: raid6: using avx512x2 recovery algorithm
Nov 8 00:24:56.440161 kernel: xor: automatically using best checksumming function avx
Nov 8 00:24:56.606155 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:24:56.616466 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:24:56.621336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:24:56.637137 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Nov 8 00:24:56.642188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:24:56.651379 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:24:56.687578 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 8 00:24:56.719725 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:24:56.727356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:24:56.779246 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:24:56.786395 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:24:56.813593 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:24:56.816411 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:24:56.818223 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:24:56.819223 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:24:56.827409 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:24:56.858392 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:24:56.893149 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:24:56.899452 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:24:56.899628 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:24:56.918967 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 8 00:24:56.919257 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 8 00:24:56.919455 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 8 00:24:56.919636 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:24:56.919661 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:24:56.918830 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:24:56.923206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:24:56.923465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:24:56.924105 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:24:56.942278 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:dc:62:6e:27:99
Nov 8 00:24:56.938847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:24:56.952032 (udev-worker)[456]: Network interface NamePolicy= disabled on kernel command line.
Nov 8 00:24:56.954585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:24:56.954717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:24:56.965179 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 8 00:24:56.968696 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 8 00:24:56.969454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:24:56.981136 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 8 00:24:56.984904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:24:56.987143 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:24:56.987196 kernel: GPT:9289727 != 33554431
Nov 8 00:24:56.987218 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:24:56.987238 kernel: GPT:9289727 != 33554431
Nov 8 00:24:56.987256 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:24:56.987276 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:24:57.000308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:24:57.018427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:24:57.064145 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (449)
Nov 8 00:24:57.071171 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (459)
Nov 8 00:24:57.114790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 8 00:24:57.124760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 8 00:24:57.136552 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 8 00:24:57.142923 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 8 00:24:57.143536 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 8 00:24:57.151382 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:24:57.157883 disk-uuid[633]: Primary Header is updated.
Nov 8 00:24:57.157883 disk-uuid[633]: Secondary Entries is updated.
Nov 8 00:24:57.157883 disk-uuid[633]: Secondary Header is updated.
Nov 8 00:24:57.162174 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:24:57.168160 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:24:58.175626 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:24:58.175685 disk-uuid[634]: The operation has completed successfully.
Nov 8 00:24:58.314536 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:24:58.314668 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:24:58.336302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:24:58.341663 sh[979]: Success
Nov 8 00:24:58.362158 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:24:58.469705 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:24:58.478250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:24:58.479551 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:24:58.516692 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:24:58.516770 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:24:58.516785 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:24:58.520070 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:24:58.520152 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:24:58.642145 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:24:58.654220 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:24:58.655625 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:24:58.661314 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:24:58.664326 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:24:58.683233 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:24:58.683277 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:24:58.685795 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:24:58.699357 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:24:58.713168 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:24:58.713342 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:24:58.722189 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:24:58.730368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:24:58.775238 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:24:58.781352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:24:58.805418 systemd-networkd[1171]: lo: Link UP
Nov 8 00:24:58.805430 systemd-networkd[1171]: lo: Gained carrier
Nov 8 00:24:58.807152 systemd-networkd[1171]: Enumeration completed
Nov 8 00:24:58.807620 systemd-networkd[1171]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:24:58.807626 systemd-networkd[1171]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:24:58.808656 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:24:58.810223 systemd[1]: Reached target network.target - Network.
Nov 8 00:24:58.817503 systemd-networkd[1171]: eth0: Link UP
Nov 8 00:24:58.817512 systemd-networkd[1171]: eth0: Gained carrier
Nov 8 00:24:58.817528 systemd-networkd[1171]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:24:58.833563 systemd-networkd[1171]: eth0: DHCPv4 address 172.31.25.121/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 8 00:24:59.069776 ignition[1100]: Ignition 2.19.0
Nov 8 00:24:59.069789 ignition[1100]: Stage: fetch-offline
Nov 8 00:24:59.070055 ignition[1100]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:24:59.070068 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:24:59.070430 ignition[1100]: Ignition finished successfully
Nov 8 00:24:59.072635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:24:59.078336 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:24:59.093807 ignition[1180]: Ignition 2.19.0
Nov 8 00:24:59.093821 ignition[1180]: Stage: fetch
Nov 8 00:24:59.094301 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:24:59.094316 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:24:59.094434 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:24:59.102323 ignition[1180]: PUT result: OK
Nov 8 00:24:59.103835 ignition[1180]: parsed url from cmdline: ""
Nov 8 00:24:59.103845 ignition[1180]: no config URL provided
Nov 8 00:24:59.103855 ignition[1180]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:24:59.103890 ignition[1180]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:24:59.103913 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:24:59.104493 ignition[1180]: PUT result: OK
Nov 8 00:24:59.105071 ignition[1180]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 8 00:24:59.105758 ignition[1180]: GET result: OK
Nov 8 00:24:59.105842 ignition[1180]: parsing config with SHA512: 8865849230ed2a18b1486037f6bc5f3ae2e9cce6412fb35db7a55e31c3e3feb6516344c010633ca2b394210e5c86cb4f3d6e8ef4dfb383806e06e78d33b0c6a8
Nov 8 00:24:59.111775 unknown[1180]: fetched base config from "system"
Nov 8 00:24:59.111800 unknown[1180]: fetched base config from "system"
Nov 8 00:24:59.113227 ignition[1180]: fetch: fetch complete
Nov 8 00:24:59.111812 unknown[1180]: fetched user config from "aws"
Nov 8 00:24:59.113236 ignition[1180]: fetch: fetch passed
Nov 8 00:24:59.113319 ignition[1180]: Ignition finished successfully
Nov 8 00:24:59.116080 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:24:59.124428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:24:59.140559 ignition[1186]: Ignition 2.19.0
Nov 8 00:24:59.140573 ignition[1186]: Stage: kargs
Nov 8 00:24:59.141186 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:24:59.141202 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:24:59.141333 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:24:59.142161 ignition[1186]: PUT result: OK
Nov 8 00:24:59.144993 ignition[1186]: kargs: kargs passed
Nov 8 00:24:59.145086 ignition[1186]: Ignition finished successfully
Nov 8 00:24:59.146919 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:24:59.151345 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:24:59.167276 ignition[1192]: Ignition 2.19.0
Nov 8 00:24:59.167290 ignition[1192]: Stage: disks
Nov 8 00:24:59.167764 ignition[1192]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:24:59.167779 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:24:59.167898 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:24:59.168784 ignition[1192]: PUT result: OK
Nov 8 00:24:59.171631 ignition[1192]: disks: disks passed
Nov 8 00:24:59.171705 ignition[1192]: Ignition finished successfully
Nov 8 00:24:59.173584 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:24:59.174249 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:24:59.174619 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:24:59.175187 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:24:59.175757 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:24:59.176344 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:24:59.182355 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:24:59.209377 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:24:59.212244 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:24:59.217238 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:24:59.315142 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:24:59.315761 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:24:59.317326 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:24:59.332274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:24:59.335398 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:24:59.337433 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:24:59.337552 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:24:59.337591 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:24:59.349188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:24:59.352172 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219)
Nov 8 00:24:59.358143 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:24:59.358198 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:24:59.358212 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:24:59.356275 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:24:59.368131 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:24:59.369901 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:24:59.657166 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:24:59.671425 initrd-setup-root[1251]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:24:59.676110 initrd-setup-root[1258]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:24:59.681332 initrd-setup-root[1265]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:24:59.939133 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:24:59.942296 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:24:59.946311 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:24:59.956521 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:24:59.958652 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:24:59.988489 systemd-networkd[1171]: eth0: Gained IPv6LL
Nov 8 00:24:59.991528 ignition[1333]: INFO : Ignition 2.19.0
Nov 8 00:24:59.993845 ignition[1333]: INFO : Stage: mount
Nov 8 00:24:59.993845 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:24:59.993845 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:24:59.993845 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:24:59.998394 ignition[1333]: INFO : PUT result: OK
Nov 8 00:25:00.000487 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:25:00.003928 ignition[1333]: INFO : mount: mount passed
Nov 8 00:25:00.003928 ignition[1333]: INFO : Ignition finished successfully
Nov 8 00:25:00.005733 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:25:00.010262 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:25:00.030431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:25:00.047159 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345)
Nov 8 00:25:00.050334 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:25:00.050404 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:25:00.052818 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:25:00.058145 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:25:00.059836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:25:00.079932 ignition[1361]: INFO : Ignition 2.19.0
Nov 8 00:25:00.079932 ignition[1361]: INFO : Stage: files
Nov 8 00:25:00.081562 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:25:00.081562 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:25:00.081562 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:25:00.081562 ignition[1361]: INFO : PUT result: OK
Nov 8 00:25:00.084491 ignition[1361]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:25:00.085691 ignition[1361]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:25:00.085691 ignition[1361]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:25:00.120099 ignition[1361]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:25:00.121152 ignition[1361]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:25:00.121152 ignition[1361]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:25:00.120601 unknown[1361]: wrote ssh authorized keys file for user: core
Nov 8 00:25:00.131668 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:25:00.132554 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:25:00.313358 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:25:00.473207 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:25:00.474574 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:25:00.482509 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:25:00.482509 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:25:00.482509 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 8 00:25:00.865100 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:25:03.660697 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:25:03.660697 ignition[1361]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:25:03.663281 ignition[1361]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:25:03.663281 ignition[1361]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:25:03.663281 ignition[1361]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:25:03.663281 ignition[1361]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:25:03.667862 ignition[1361]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:25:03.667862 ignition[1361]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:25:03.667862 ignition[1361]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:25:03.667862 ignition[1361]: INFO : files: files passed
Nov 8 00:25:03.667862 ignition[1361]: INFO : Ignition finished successfully
Nov 8 00:25:03.665727 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:25:03.672438 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:25:03.675994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:25:03.684673 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:25:03.685380 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:25:03.693446 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:25:03.695353 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:25:03.696646 initrd-setup-root-after-ignition[1390]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:25:03.698166 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:25:03.699219 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:25:03.703395 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:25:03.747521 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:25:03.747666 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:25:03.749140 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:25:03.750263 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:25:03.751208 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:25:03.758455 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:25:03.772238 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:25:03.777365 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:25:03.791152 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:25:03.791846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:25:03.792996 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:25:03.793780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:25:03.793966 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:25:03.795099 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:25:03.795936 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:25:03.796729 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:25:03.797651 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:25:03.798434 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:25:03.799225 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:25:03.799978 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:25:03.800791 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:25:03.802034 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:25:03.802807 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:25:03.803532 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:25:03.803714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:25:03.804954 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:25:03.805717 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:25:03.806412 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:25:03.806572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:25:03.807226 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:25:03.807403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:25:03.808755 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:25:03.809019 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:25:03.809796 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:25:03.809951 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:25:03.814467 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:25:03.815736 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:25:03.816549 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:25:03.820391 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:25:03.821704 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:25:03.822561 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:25:03.824469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:25:03.824648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:25:03.835351 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:25:03.839481 ignition[1414]: INFO : Ignition 2.19.0
Nov 8 00:25:03.839481 ignition[1414]: INFO : Stage: umount
Nov 8 00:25:03.845425 ignition[1414]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:25:03.845425 ignition[1414]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:25:03.845425 ignition[1414]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:25:03.845425 ignition[1414]: INFO : PUT result: OK
Nov 8 00:25:03.842913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:25:03.852113 ignition[1414]: INFO : umount: umount passed
Nov 8 00:25:03.852113 ignition[1414]: INFO : Ignition finished successfully
Nov 8 00:25:03.853161 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:25:03.853299 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:25:03.854037 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:25:03.854101 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:25:03.855879 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:25:03.855939 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:25:03.857189 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:25:03.857247 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:25:03.858372 systemd[1]: Stopped target network.target - Network.
Nov 8 00:25:03.859179 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:25:03.859245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:25:03.861113 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:25:03.861590 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:25:03.862175 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:25:03.862703 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:25:03.864198 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:25:03.864771 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:25:03.865027 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:25:03.866100 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:25:03.866179 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:25:03.866933 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:25:03.867002 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:25:03.869732 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:25:03.869781 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:25:03.870271 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:25:03.871248 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:25:03.873539 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:25:03.874365 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:25:03.874489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:25:03.876029 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:25:03.876462 systemd-networkd[1171]: eth0: DHCPv6 lease lost
Nov 8 00:25:03.876480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:25:03.879451 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:25:03.879611 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:25:03.881282 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:25:03.881422 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:25:03.885023 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:25:03.885108 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:25:03.890279 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:25:03.891626 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:25:03.891715 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:25:03.892515 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:25:03.892580 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:25:03.894778 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:25:03.894843 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:25:03.895438 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:25:03.895501 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:25:03.898297 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:25:03.915642 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:25:03.915873 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:25:03.917459 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:25:03.917585 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:25:03.919053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:25:03.919168 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:25:03.919881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:25:03.919931 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:25:03.920675 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:25:03.920739 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:25:03.921980 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:25:03.922045 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:25:03.923096 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:25:03.923180 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:25:03.928363 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:25:03.929105 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:25:03.929200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:25:03.929812 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:25:03.929867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:25:03.938771 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:25:03.938922 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:25:03.940417 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:25:03.946343 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:25:03.955795 systemd[1]: Switching root.
Nov 8 00:25:03.999195 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:25:03.999273 systemd-journald[178]: Journal stopped
Nov 8 00:25:05.576320 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:25:05.576408 kernel: SELinux: policy capability open_perms=1
Nov 8 00:25:05.576429 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:25:05.576453 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:25:05.576476 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:25:05.576494 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:25:05.576518 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:25:05.576536 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:25:05.576554 kernel: audit: type=1403 audit(1762561504.424:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:25:05.576579 systemd[1]: Successfully loaded SELinux policy in 78.052ms.
Nov 8 00:25:05.576606 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.767ms.
Nov 8 00:25:05.576630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:25:05.576654 systemd[1]: Detected virtualization amazon.
Nov 8 00:25:05.576676 systemd[1]: Detected architecture x86-64.
Nov 8 00:25:05.576695 systemd[1]: Detected first boot.
Nov 8 00:25:05.576720 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:25:05.576738 zram_generator::config[1456]: No configuration found.
Nov 8 00:25:05.576764 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:25:05.576783 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:25:05.576802 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:25:05.576832 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:25:05.576857 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:25:05.576876 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:25:05.576894 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:25:05.576913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:25:05.576933 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:25:05.576953 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:25:05.576973 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:25:05.576992 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:25:05.578165 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:25:05.578213 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:25:05.578237 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:25:05.578259 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:25:05.578282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:25:05.578305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:25:05.578327 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:25:05.578348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:25:05.578370 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:25:05.578392 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:25:05.578418 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:25:05.578440 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:25:05.578462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:25:05.578484 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:25:05.578505 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:25:05.578526 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:25:05.578547 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:25:05.578569 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:25:05.578596 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:25:05.578617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:25:05.578637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:25:05.578658 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:25:05.578680 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:25:05.578700 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:25:05.578717 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:25:05.578735 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:05.578759 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:25:05.578779 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:25:05.578798 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:25:05.578823 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:25:05.578842 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:25:05.578862 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:25:05.578881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:25:05.578900 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:25:05.578919 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:25:05.578942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:25:05.578962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:25:05.578981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:25:05.579000 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:25:05.579020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:25:05.579040 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:25:05.579059 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:25:05.579077 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:25:05.579100 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:25:05.579925 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:25:05.579959 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:25:05.579980 kernel: loop: module loaded
Nov 8 00:25:05.580002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:25:05.580023 kernel: fuse: init (API version 7.39)
Nov 8 00:25:05.580044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:25:05.580066 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:25:05.580088 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:25:05.580131 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:25:05.580154 systemd[1]: Stopped verity-setup.service.
Nov 8 00:25:05.580176 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:05.580198 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:25:05.580219 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:25:05.580240 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:25:05.580262 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:25:05.580283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:25:05.580304 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:25:05.580330 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:25:05.580350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:25:05.580372 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:25:05.580393 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:25:05.580416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:25:05.580439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:25:05.581196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:25:05.581225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:25:05.581251 kernel: ACPI: bus type drm_connector registered
Nov 8 00:25:05.581271 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:25:05.581293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:25:05.581350 systemd-journald[1544]: Collecting audit messages is disabled.
Nov 8 00:25:05.581406 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:25:05.581427 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:25:05.581447 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:25:05.581468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:25:05.581495 systemd-journald[1544]: Journal started
Nov 8 00:25:05.581535 systemd-journald[1544]: Runtime Journal (/run/log/journal/ec2ffc0e3ae90fdfcdd196095f4ca18d) is 4.7M, max 38.2M, 33.4M free.
Nov 8 00:25:05.146220 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:25:05.213707 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 8 00:25:05.583151 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:25:05.214219 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:25:05.585553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:25:05.586790 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:25:05.588002 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:25:05.603862 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:25:05.615262 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:25:05.627236 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:25:05.628248 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:25:05.628420 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:25:05.633981 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:25:05.641408 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:25:05.650414 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:25:05.651404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:25:05.659339 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:25:05.664436 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:25:05.667302 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:25:05.670459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:25:05.672250 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:25:05.695408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:25:05.708449 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:25:05.714429 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:25:05.720222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:25:05.721589 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:25:05.725334 systemd-journald[1544]: Time spent on flushing to /var/log/journal/ec2ffc0e3ae90fdfcdd196095f4ca18d is 99.254ms for 980 entries.
Nov 8 00:25:05.725334 systemd-journald[1544]: System Journal (/var/log/journal/ec2ffc0e3ae90fdfcdd196095f4ca18d) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:25:05.838656 systemd-journald[1544]: Received client request to flush runtime journal.
Nov 8 00:25:05.838753 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 00:25:05.723494 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:25:05.724840 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:25:05.729804 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:25:05.736012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:25:05.745355 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:25:05.752247 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:25:05.761338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:25:05.815699 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:25:05.842449 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:25:05.866982 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:25:05.869184 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:25:05.895621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:25:05.897174 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:25:05.915322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:25:05.924153 kernel: loop1: detected capacity change from 0 to 61336
Nov 8 00:25:05.967053 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Nov 8 00:25:05.968202 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Nov 8 00:25:05.977854 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:25:06.031309 kernel: loop2: detected capacity change from 0 to 219144
Nov 8 00:25:06.327149 kernel: loop3: detected capacity change from 0 to 142488
Nov 8 00:25:06.430214 kernel: loop4: detected capacity change from 0 to 140768
Nov 8 00:25:06.473728 kernel: loop5: detected capacity change from 0 to 61336
Nov 8 00:25:06.500139 kernel: loop6: detected capacity change from 0 to 219144
Nov 8 00:25:06.537142 kernel: loop7: detected capacity change from 0 to 142488
Nov 8 00:25:06.554368 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Nov 8 00:25:06.555468 (sd-merge)[1612]: Merged extensions into '/usr'.
Nov 8 00:25:06.562311 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:25:06.571464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:25:06.572329 systemd[1]: Reloading requested from client PID 1586 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:25:06.572342 systemd[1]: Reloading...
Nov 8 00:25:06.639368 systemd-udevd[1614]: Using default interface naming scheme 'v255'.
Nov 8 00:25:06.676179 zram_generator::config[1639]: No configuration found.
Nov 8 00:25:06.807800 (udev-worker)[1677]: Network interface NamePolicy= disabled on kernel command line.
Nov 8 00:25:06.972215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:25:06.991180 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Nov 8 00:25:07.016447 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 8 00:25:07.037167 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:25:07.043140 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Nov 8 00:25:07.050166 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Nov 8 00:25:07.063148 kernel: ACPI: button: Sleep Button [SLPF]
Nov 8 00:25:07.111831 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:25:07.160580 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:25:07.161783 systemd[1]: Reloading finished in 588 ms.
Nov 8 00:25:07.165381 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1693)
Nov 8 00:25:07.200701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:25:07.211041 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:25:07.269415 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:25:07.269875 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:25:07.281343 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:25:07.299346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:25:07.312339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:25:07.313734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:25:07.335228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 8 00:25:07.351063 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:25:07.352111 systemd[1]: Reloading requested from client PID 1777 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:25:07.352309 systemd[1]: Reloading...
Nov 8 00:25:07.373885 systemd-tmpfiles[1795]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:25:07.374920 systemd-tmpfiles[1795]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:25:07.376566 systemd-tmpfiles[1795]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:25:07.377042 systemd-tmpfiles[1795]: ACLs are not supported, ignoring.
Nov 8 00:25:07.377387 systemd-tmpfiles[1795]: ACLs are not supported, ignoring.
Nov 8 00:25:07.385690 systemd-tmpfiles[1795]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:25:07.385706 systemd-tmpfiles[1795]: Skipping /boot
Nov 8 00:25:07.405748 systemd-tmpfiles[1795]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:25:07.405769 systemd-tmpfiles[1795]: Skipping /boot
Nov 8 00:25:07.444152 zram_generator::config[1829]: No configuration found.
Nov 8 00:25:07.602605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:25:07.678538 systemd[1]: Reloading finished in 325 ms.
Nov 8 00:25:07.700718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:25:07.702013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:25:07.721501 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:25:07.728254 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:25:07.736483 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:25:07.747828 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:25:07.753179 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:25:07.765845 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:25:07.770474 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:25:07.784450 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:25:07.790842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:07.791161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:25:07.793892 lvm[1893]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:25:07.802269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:25:07.809510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:25:07.818448 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:25:07.819257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:25:07.819440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:07.827874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:07.829067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:25:07.829350 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:25:07.829494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:07.837876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:25:07.838082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:25:07.845423 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:25:07.845632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:25:07.850785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:07.853068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:25:07.862869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:25:07.864802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:25:07.865088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:25:07.865345 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:25:07.867398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:25:07.869769 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:25:07.872320 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:25:07.874283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:25:07.875039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:25:07.885414 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:25:07.885642 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:25:07.887964 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:25:07.891252 augenrules[1919]: No rules
Nov 8 00:25:07.896302 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:25:07.898157 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:25:07.905784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:25:07.916268 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:25:07.916924 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:25:07.925449 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:25:07.933550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:25:07.939646 lvm[1931]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:25:07.947238 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:25:07.962201 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:25:07.982631 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:25:07.991602 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:25:07.993406 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:25:08.051018 systemd-networkd[1785]: lo: Link UP
Nov 8 00:25:08.051029 systemd-networkd[1785]: lo: Gained carrier
Nov 8 00:25:08.053598 systemd-networkd[1785]: Enumeration completed
Nov 8 00:25:08.053851 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:25:08.056613 systemd-resolved[1904]: Positive Trust Anchors:
Nov 8 00:25:08.056637 systemd-resolved[1904]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:25:08.056691 systemd-resolved[1904]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:25:08.057446 systemd-networkd[1785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:25:08.057455 systemd-networkd[1785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:25:08.060018 systemd-networkd[1785]: eth0: Link UP
Nov 8 00:25:08.060357 systemd-networkd[1785]: eth0: Gained carrier
Nov 8 00:25:08.060463 systemd-networkd[1785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:25:08.067498 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:25:08.074144 systemd-resolved[1904]: Defaulting to hostname 'linux'.
Nov 8 00:25:08.075201 systemd-networkd[1785]: eth0: DHCPv4 address 172.31.25.121/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 8 00:25:08.077257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:25:08.077939 systemd[1]: Reached target network.target - Network.
Nov 8 00:25:08.078436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:25:08.078844 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:25:08.079351 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:25:08.079773 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:25:08.080359 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:25:08.080836 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:25:08.081280 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:25:08.081637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:25:08.081675 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:25:08.082035 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:25:08.083856 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:25:08.085785 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:25:08.094478 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:25:08.095704 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:25:08.096331 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:25:08.096798 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:25:08.097386 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:25:08.097427 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:25:08.098593 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:25:08.103355 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:25:08.108983 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:25:08.112258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:25:08.125356 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:25:08.125959 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:25:08.135379 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:25:08.159494 systemd[1]: Started ntpd.service - Network Time Service.
Nov 8 00:25:08.163513 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:25:08.173821 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 8 00:25:08.183398 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:25:08.188308 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:25:08.195082 jq[1950]: false
Nov 8 00:25:08.203369 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:25:08.211888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:25:08.212844 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:25:08.221362 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found loop4
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found loop5
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found loop6
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found loop7
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p1
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p2
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p3
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found usr
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p4
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p6
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p7
Nov 8 00:25:08.240145 extend-filesystems[1951]: Found nvme0n1p9
Nov 8 00:25:08.240145 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9
Nov 8 00:25:08.362715 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.351 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.352 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.354 INFO Fetch successful
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.356 INFO Fetch successful
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.358 INFO Fetch successful
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.358 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.359 INFO Fetch successful
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.361 INFO Fetch failed with 404: resource not found
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.362 INFO Fetch successful
Nov 8 00:25:08.362754 coreos-metadata[1948]: Nov 08 00:25:08.362 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Nov 8 00:25:08.368664 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9
Nov 8 00:25:08.329075 dbus-daemon[1949]: [system] SELinux support is enabled
Nov 8 00:25:08.240980 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.362 INFO Fetch successful
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.362 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.363 INFO Fetch successful
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.365 INFO Fetch successful
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.365 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Nov 8 00:25:08.375454 coreos-metadata[1948]: Nov 08 00:25:08.371 INFO Fetch successful
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: ----------------------------------------------------
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: ntp-4 is maintained by Network Time Foundation,
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: corporation.  Support and training for ntp-4 are
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: available at https://www.nwtime.org/support
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: ----------------------------------------------------
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: proto: precision = 0.099 usec (-23)
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: basedate set to 2025-10-26
Nov 8 00:25:08.375790 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: gps base set to 2025-10-26 (week 2390)
Nov 8 00:25:08.379334 extend-filesystems[1979]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:25:08.356557 dbus-daemon[1949]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1785 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 8 00:25:08.385991 update_engine[1968]: I20251108 00:25:08.336138 1968 main.cc:92] Flatcar Update Engine starting
Nov 8 00:25:08.385991 update_engine[1968]: I20251108 00:25:08.360074 1968 update_check_scheduler.cc:74] Next update check in 6m10s
Nov 8 00:25:08.253643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:25:08.387399 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123
Nov 8 00:25:08.387399 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 8 00:25:08.366257 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 8 00:25:08.253897 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:25:08.371454 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting
Nov 8 00:25:08.264627 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:25:08.371480 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Listen normally on 3 eth0 172.31.25.121:123
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Listen normally on 4 lo [::1]:123
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: bind(21) AF_INET6 fe80::4dc:62ff:fe6e:2799%2#123 flags 0x11 failed: Cannot assign requested address
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: unable to create socket on eth0 (5) for fe80::4dc:62ff:fe6e:2799%2#123
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: failed to init interface for address fe80::4dc:62ff:fe6e:2799%2
Nov 8 00:25:08.397411 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: Listening on routing socket on fd #21 for interface updates
Nov 8 00:25:08.397717 jq[1969]: true
Nov 8 00:25:08.268060 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:25:08.371491 ntpd[1953]: ----------------------------------------------------
Nov 8 00:25:08.398070 tar[1976]: linux-amd64/LICENSE
Nov 8 00:25:08.398070 tar[1976]: linux-amd64/helm
Nov 8 00:25:08.281494 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:25:08.371501 ntpd[1953]: ntp-4 is maintained by Network Time Foundation,
Nov 8 00:25:08.281790 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:25:08.371510 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 8 00:25:08.331668 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:25:08.406545 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:25:08.406545 ntpd[1953]: 8 Nov 00:25:08 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:25:08.371520 ntpd[1953]: corporation.  Support and training for ntp-4 are
Nov 8 00:25:08.346609 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:25:08.371530 ntpd[1953]: available at https://www.nwtime.org/support
Nov 8 00:25:08.409255 jq[1993]: true
Nov 8 00:25:08.346654 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:25:08.371540 ntpd[1953]: ----------------------------------------------------
Nov 8 00:25:08.361819 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:25:08.373390 ntpd[1953]: proto: precision = 0.099 usec (-23)
Nov 8 00:25:08.361847 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:25:08.376349 ntpd[1953]: basedate set to 2025-10-26
Nov 8 00:25:08.366542 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:25:08.376369 ntpd[1953]: gps base set to 2025-10-26 (week 2390)
Nov 8 00:25:08.374977 (ntainerd)[1986]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:25:08.382778 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123
Nov 8 00:25:08.379331 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 8 00:25:08.382833 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 8 00:25:08.383309 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:25:08.393986 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123
Nov 8 00:25:08.394085 ntpd[1953]: Listen normally on 3 eth0 172.31.25.121:123
Nov 8 00:25:08.394155 ntpd[1953]: Listen normally on 4 lo [::1]:123
Nov 8 00:25:08.394667 ntpd[1953]: bind(21) AF_INET6 fe80::4dc:62ff:fe6e:2799%2#123 flags 0x11 failed: Cannot assign requested address
Nov 8 00:25:08.394721 ntpd[1953]: unable to create socket on eth0 (5) for fe80::4dc:62ff:fe6e:2799%2#123
Nov 8 00:25:08.394738 ntpd[1953]: failed to init interface for address fe80::4dc:62ff:fe6e:2799%2
Nov 8 00:25:08.394779 ntpd[1953]: Listening on routing socket on fd #21 for interface updates
Nov 8 00:25:08.401012 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:25:08.401050 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:25:08.429249 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Nov 8 00:25:08.448382 extend-filesystems[1979]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 8 00:25:08.448382 extend-filesystems[1979]: old_desc_blocks = 1, new_desc_blocks = 2
Nov 8 00:25:08.448382 extend-filesystems[1979]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Nov 8 00:25:08.450786 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9
Nov 8 00:25:08.451669 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:25:08.451944 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:25:08.458868 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 8 00:25:08.511093 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:25:08.547325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 8 00:25:08.548584 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:25:08.568289 systemd-logind[1962]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 8 00:25:08.568320 systemd-logind[1962]: Watching system buttons on /dev/input/event3 (Sleep Button)
Nov 8 00:25:08.568344 systemd-logind[1962]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 00:25:08.569949 systemd-logind[1962]: New seat seat0.
Nov 8 00:25:08.577020 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:25:08.607699 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:25:08.608429 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:25:08.621791 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1693)
Nov 8 00:25:08.657367 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:25:08.657645 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:25:08.667536 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:25:08.670938 bash[2042]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:25:08.674641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:25:08.687553 systemd[1]: Starting sshkeys.service...
Nov 8 00:25:08.729593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:25:08.759338 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:25:08.769221 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 00:25:08.770973 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 00:25:08.783011 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 8 00:25:08.795269 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 8 00:25:08.913575 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:25:08.919452 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 8 00:25:08.919808 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 8 00:25:08.951061 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1994 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 8 00:25:08.963674 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 8 00:25:08.975973 coreos-metadata[2077]: Nov 08 00:25:08.974 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 8 00:25:08.981017 coreos-metadata[2077]: Nov 08 00:25:08.980 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Nov 8 00:25:08.986904 coreos-metadata[2077]: Nov 08 00:25:08.986 INFO Fetch successful
Nov 8 00:25:08.986904 coreos-metadata[2077]: Nov 08 00:25:08.986 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 8 00:25:09.004799 coreos-metadata[2077]: Nov 08 00:25:08.996 INFO Fetch successful
Nov 8 00:25:09.007244 unknown[2077]: wrote ssh authorized keys file for user: core
Nov 8 00:25:09.070622 polkitd[2136]: Started polkitd version 121
Nov 8 00:25:09.071393 update-ssh-keys[2153]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:25:09.072286 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 8 00:25:09.079852 systemd[1]: Finished sshkeys.service.
Nov 8 00:25:09.108111 polkitd[2136]: Loading rules from directory /etc/polkit-1/rules.d
Nov 8 00:25:09.110181 polkitd[2136]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 8 00:25:09.112097 polkitd[2136]: Finished loading, compiling and executing 2 rules
Nov 8 00:25:09.116040 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 8 00:25:09.117639 systemd[1]: Started polkit.service - Authorization Manager.
Nov 8 00:25:09.118746 polkitd[2136]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 8 00:25:09.150200 systemd-resolved[1904]: System hostname changed to 'ip-172-31-25-121'.
Nov 8 00:25:09.150201 systemd-hostnamed[1994]: Hostname set to (transient)
Nov 8 00:25:09.189802 containerd[1986]: time="2025-11-08T00:25:09.189709809Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:25:09.228832 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 00:25:09.236528 systemd[1]: Started sshd@0-172.31.25.121:22-139.178.89.65:51994.service - OpenSSH per-connection server daemon (139.178.89.65:51994).
Nov 8 00:25:09.261268 containerd[1986]: time="2025-11-08T00:25:09.261175318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265387241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265440144Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265464722Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265654627Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265680143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265764085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.265782874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.266012450Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.266033127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.266053094Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:25:09.266750 containerd[1986]: time="2025-11-08T00:25:09.266067920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.268469 containerd[1986]: time="2025-11-08T00:25:09.266180198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.269271 containerd[1986]: time="2025-11-08T00:25:09.268876041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:25:09.269610 containerd[1986]: time="2025-11-08T00:25:09.269581774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:25:09.270086 containerd[1986]: time="2025-11-08T00:25:09.270060880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:25:09.270623 containerd[1986]: time="2025-11-08T00:25:09.270600097Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:25:09.270772 containerd[1986]: time="2025-11-08T00:25:09.270754739Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:25:09.275981 containerd[1986]: time="2025-11-08T00:25:09.275939765Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:25:09.276178 containerd[1986]: time="2025-11-08T00:25:09.276156479Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:25:09.276584 containerd[1986]: time="2025-11-08T00:25:09.276292056Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:25:09.276584 containerd[1986]: time="2025-11-08T00:25:09.276322338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:25:09.276584 containerd[1986]: time="2025-11-08T00:25:09.276345347Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:25:09.276584 containerd[1986]: time="2025-11-08T00:25:09.276522365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:25:09.279227 containerd[1986]: time="2025-11-08T00:25:09.279190159Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280326272Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280365383Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280389136Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280417441Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280444216Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280469928Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280498253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280529133Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280559137Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280578671Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280605663Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280642498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280672386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:25:09.281142 containerd[1986]: time="2025-11-08T00:25:09.280697472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280724021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280758948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280785618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280806659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..."
type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280842518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280871280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280903832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280929013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280953403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.280983857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.281015064Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.281052896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.281078931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.281680 containerd[1986]: time="2025-11-08T00:25:09.281097379Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282464007Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282837007Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282862344Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282883274Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282899379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282926204Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282941604Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:25:09.283143 containerd[1986]: time="2025-11-08T00:25:09.282957433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:25:09.284675 containerd[1986]: time="2025-11-08T00:25:09.283982369Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:25:09.284675 containerd[1986]: time="2025-11-08T00:25:09.284079896Z" level=info msg="Connect containerd service" Nov 8 00:25:09.284675 containerd[1986]: time="2025-11-08T00:25:09.284149760Z" level=info msg="using legacy CRI server" Nov 8 00:25:09.284675 containerd[1986]: time="2025-11-08T00:25:09.284160675Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:25:09.284675 containerd[1986]: time="2025-11-08T00:25:09.284293115Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:25:09.285598 containerd[1986]: time="2025-11-08T00:25:09.285570908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:25:09.285844 containerd[1986]: time="2025-11-08T00:25:09.285801198Z" level=info msg="Start subscribing containerd event" Nov 8 00:25:09.285941 containerd[1986]: time="2025-11-08T00:25:09.285927847Z" level=info msg="Start recovering state" Nov 8 00:25:09.286255 containerd[1986]: time="2025-11-08T00:25:09.286061208Z" level=info msg="Start event monitor" Nov 8 00:25:09.286255 containerd[1986]: time="2025-11-08T00:25:09.286095815Z" level=info msg="Start snapshots 
syncer" Nov 8 00:25:09.286255 containerd[1986]: time="2025-11-08T00:25:09.286109974Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:25:09.286255 containerd[1986]: time="2025-11-08T00:25:09.286138513Z" level=info msg="Start streaming server" Nov 8 00:25:09.288108 containerd[1986]: time="2025-11-08T00:25:09.286848097Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:25:09.288108 containerd[1986]: time="2025-11-08T00:25:09.286920430Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:25:09.287083 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:25:09.289470 containerd[1986]: time="2025-11-08T00:25:09.288892404Z" level=info msg="containerd successfully booted in 0.100547s" Nov 8 00:25:09.371926 ntpd[1953]: bind(24) AF_INET6 fe80::4dc:62ff:fe6e:2799%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:25:09.372482 ntpd[1953]: 8 Nov 00:25:09 ntpd[1953]: bind(24) AF_INET6 fe80::4dc:62ff:fe6e:2799%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:25:09.372482 ntpd[1953]: 8 Nov 00:25:09 ntpd[1953]: unable to create socket on eth0 (6) for fe80::4dc:62ff:fe6e:2799%2#123 Nov 8 00:25:09.372482 ntpd[1953]: 8 Nov 00:25:09 ntpd[1953]: failed to init interface for address fe80::4dc:62ff:fe6e:2799%2 Nov 8 00:25:09.372341 ntpd[1953]: unable to create socket on eth0 (6) for fe80::4dc:62ff:fe6e:2799%2#123 Nov 8 00:25:09.372359 ntpd[1953]: failed to init interface for address fe80::4dc:62ff:fe6e:2799%2 Nov 8 00:25:09.393366 systemd-networkd[1785]: eth0: Gained IPv6LL Nov 8 00:25:09.397261 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:25:09.399977 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:25:09.412736 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Nov 8 00:25:09.425028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:09.430519 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:25:09.478541 sshd[2168]: Accepted publickey for core from 139.178.89.65 port 51994 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:09.483695 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:09.509549 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:25:09.518617 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: Initializing new seelog logger Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 processing appconfig overrides Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 processing appconfig overrides Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 processing appconfig overrides Nov 8 00:25:09.545212 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO Proxy environment variables: Nov 8 00:25:09.535427 systemd-logind[1962]: New session 1 of user core. 
Nov 8 00:25:09.548645 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.548645 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:25:09.548784 amazon-ssm-agent[2173]: 2025/11/08 00:25:09 processing appconfig overrides Nov 8 00:25:09.558469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:25:09.561806 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:25:09.576541 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:25:09.592961 (systemd)[2192]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:25:09.642432 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO https_proxy: Nov 8 00:25:09.691146 tar[1976]: linux-amd64/README.md Nov 8 00:25:09.723223 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:25:09.742575 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO http_proxy: Nov 8 00:25:09.815095 systemd[2192]: Queued start job for default target default.target. Nov 8 00:25:09.820896 systemd[2192]: Created slice app.slice - User Application Slice. Nov 8 00:25:09.820940 systemd[2192]: Reached target paths.target - Paths. Nov 8 00:25:09.820960 systemd[2192]: Reached target timers.target - Timers. Nov 8 00:25:09.825270 systemd[2192]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:25:09.841041 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO no_proxy: Nov 8 00:25:09.848033 systemd[2192]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:25:09.848365 systemd[2192]: Reached target sockets.target - Sockets. Nov 8 00:25:09.848473 systemd[2192]: Reached target basic.target - Basic System. Nov 8 00:25:09.848703 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:25:09.852073 systemd[2192]: Reached target default.target - Main User Target. 
Nov 8 00:25:09.852394 systemd[2192]: Startup finished in 240ms. Nov 8 00:25:09.859182 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:25:09.940258 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:25:10.024511 systemd[1]: Started sshd@1-172.31.25.121:22-139.178.89.65:51998.service - OpenSSH per-connection server daemon (139.178.89.65:51998). Nov 8 00:25:10.039208 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:25:10.067822 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO Agent will take identity from EC2 Nov 8 00:25:10.067993 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] Starting Core Agent Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [Registrar] Starting registrar module Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:10 INFO [EC2Identity] EC2 registration was successful. 
Nov 8 00:25:10.068140 amazon-ssm-agent[2173]: 2025-11-08 00:25:10 INFO [CredentialRefresher] credentialRefresher has started Nov 8 00:25:10.068462 amazon-ssm-agent[2173]: 2025-11-08 00:25:10 INFO [CredentialRefresher] Starting credentials refresher loop Nov 8 00:25:10.068462 amazon-ssm-agent[2173]: 2025-11-08 00:25:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 8 00:25:10.139719 amazon-ssm-agent[2173]: 2025-11-08 00:25:10 INFO [CredentialRefresher] Next credential rotation will be in 32.26665233866667 minutes Nov 8 00:25:10.184231 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 51998 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:10.185784 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:10.190339 systemd-logind[1962]: New session 2 of user core. Nov 8 00:25:10.201692 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:25:10.322833 sshd[2208]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:10.326569 systemd[1]: sshd@1-172.31.25.121:22-139.178.89.65:51998.service: Deactivated successfully. Nov 8 00:25:10.328561 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:25:10.330106 systemd-logind[1962]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:25:10.331538 systemd-logind[1962]: Removed session 2. Nov 8 00:25:10.353315 systemd[1]: Started sshd@2-172.31.25.121:22-139.178.89.65:52008.service - OpenSSH per-connection server daemon (139.178.89.65:52008). Nov 8 00:25:10.517338 sshd[2215]: Accepted publickey for core from 139.178.89.65 port 52008 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:10.518741 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:10.523737 systemd-logind[1962]: New session 3 of user core. Nov 8 00:25:10.528325 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 8 00:25:10.656891 sshd[2215]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:10.660778 systemd[1]: sshd@2-172.31.25.121:22-139.178.89.65:52008.service: Deactivated successfully. Nov 8 00:25:10.662806 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:25:10.664278 systemd-logind[1962]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:25:10.665665 systemd-logind[1962]: Removed session 3. Nov 8 00:25:11.082615 amazon-ssm-agent[2173]: 2025-11-08 00:25:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 8 00:25:11.183222 amazon-ssm-agent[2173]: 2025-11-08 00:25:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2222) started Nov 8 00:25:11.283748 amazon-ssm-agent[2173]: 2025-11-08 00:25:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 8 00:25:12.371968 ntpd[1953]: Listen normally on 7 eth0 [fe80::4dc:62ff:fe6e:2799%2]:123 Nov 8 00:25:12.372375 ntpd[1953]: 8 Nov 00:25:12 ntpd[1953]: Listen normally on 7 eth0 [fe80::4dc:62ff:fe6e:2799%2]:123 Nov 8 00:25:12.530316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:12.531515 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:25:12.532343 systemd[1]: Startup finished in 592ms (kernel) + 8.693s (initrd) + 8.183s (userspace) = 17.469s. 
Nov 8 00:25:12.536019 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:14.135853 kubelet[2238]: E1108 00:25:14.135797 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:14.138687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:14.138892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:25:15.210336 systemd-resolved[1904]: Clock change detected. Flushing caches. Nov 8 00:25:20.532687 systemd[1]: Started sshd@3-172.31.25.121:22-139.178.89.65:37262.service - OpenSSH per-connection server daemon (139.178.89.65:37262). Nov 8 00:25:20.691561 sshd[2250]: Accepted publickey for core from 139.178.89.65 port 37262 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:20.692989 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:20.698547 systemd-logind[1962]: New session 4 of user core. Nov 8 00:25:20.705534 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:25:20.827990 sshd[2250]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:20.831207 systemd[1]: sshd@3-172.31.25.121:22-139.178.89.65:37262.service: Deactivated successfully. Nov 8 00:25:20.833051 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:25:20.834409 systemd-logind[1962]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:25:20.835699 systemd-logind[1962]: Removed session 4. Nov 8 00:25:20.865660 systemd[1]: Started sshd@4-172.31.25.121:22-139.178.89.65:37268.service - OpenSSH per-connection server daemon (139.178.89.65:37268). 
Nov 8 00:25:21.025158 sshd[2257]: Accepted publickey for core from 139.178.89.65 port 37268 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:21.026643 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:21.031741 systemd-logind[1962]: New session 5 of user core. Nov 8 00:25:21.033483 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:25:21.159122 sshd[2257]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:21.162193 systemd[1]: sshd@4-172.31.25.121:22-139.178.89.65:37268.service: Deactivated successfully. Nov 8 00:25:21.164114 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:25:21.165334 systemd-logind[1962]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:25:21.166607 systemd-logind[1962]: Removed session 5. Nov 8 00:25:21.191282 systemd[1]: Started sshd@5-172.31.25.121:22-139.178.89.65:37284.service - OpenSSH per-connection server daemon (139.178.89.65:37284). Nov 8 00:25:21.352976 sshd[2264]: Accepted publickey for core from 139.178.89.65 port 37284 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:21.354369 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:21.358710 systemd-logind[1962]: New session 6 of user core. Nov 8 00:25:21.365612 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:25:21.489916 sshd[2264]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:21.493232 systemd[1]: sshd@5-172.31.25.121:22-139.178.89.65:37284.service: Deactivated successfully. Nov 8 00:25:21.495530 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:25:21.497456 systemd-logind[1962]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:25:21.498671 systemd-logind[1962]: Removed session 6. 
Nov 8 00:25:21.527670 systemd[1]: Started sshd@6-172.31.25.121:22-139.178.89.65:37286.service - OpenSSH per-connection server daemon (139.178.89.65:37286). Nov 8 00:25:21.687741 sshd[2271]: Accepted publickey for core from 139.178.89.65 port 37286 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:21.689207 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:21.693607 systemd-logind[1962]: New session 7 of user core. Nov 8 00:25:21.701541 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:25:21.834560 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:25:21.834880 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:25:21.851875 sudo[2274]: pam_unix(sudo:session): session closed for user root Nov 8 00:25:21.875877 sshd[2271]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:21.879281 systemd[1]: sshd@6-172.31.25.121:22-139.178.89.65:37286.service: Deactivated successfully. Nov 8 00:25:21.881061 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:25:21.882374 systemd-logind[1962]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:25:21.883561 systemd-logind[1962]: Removed session 7. Nov 8 00:25:21.913731 systemd[1]: Started sshd@7-172.31.25.121:22-139.178.89.65:37294.service - OpenSSH per-connection server daemon (139.178.89.65:37294). Nov 8 00:25:22.078024 sshd[2279]: Accepted publickey for core from 139.178.89.65 port 37294 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:22.079584 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:22.084841 systemd-logind[1962]: New session 8 of user core. Nov 8 00:25:22.090543 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 8 00:25:22.195835 sudo[2283]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:25:22.196250 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:25:22.200437 sudo[2283]: pam_unix(sudo:session): session closed for user root Nov 8 00:25:22.206506 sudo[2282]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:25:22.206916 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:25:22.228021 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:25:22.230333 auditctl[2286]: No rules Nov 8 00:25:22.230333 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:25:22.230580 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:25:22.233918 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:25:22.270731 augenrules[2304]: No rules Nov 8 00:25:22.272188 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:25:22.273442 sudo[2282]: pam_unix(sudo:session): session closed for user root Nov 8 00:25:22.298137 sshd[2279]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:22.301678 systemd[1]: sshd@7-172.31.25.121:22-139.178.89.65:37294.service: Deactivated successfully. Nov 8 00:25:22.303834 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:25:22.305441 systemd-logind[1962]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:25:22.306715 systemd-logind[1962]: Removed session 8. Nov 8 00:25:22.336649 systemd[1]: Started sshd@8-172.31.25.121:22-139.178.89.65:37298.service - OpenSSH per-connection server daemon (139.178.89.65:37298). 
Nov 8 00:25:22.489638 sshd[2312]: Accepted publickey for core from 139.178.89.65 port 37298 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:25:22.492328 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:22.497056 systemd-logind[1962]: New session 9 of user core. Nov 8 00:25:22.504550 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:25:22.600997 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:25:22.601327 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:25:23.113653 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:25:23.115789 (dockerd)[2330]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:25:23.716621 dockerd[2330]: time="2025-11-08T00:25:23.716557267Z" level=info msg="Starting up" Nov 8 00:25:23.915714 dockerd[2330]: time="2025-11-08T00:25:23.915660566Z" level=info msg="Loading containers: start." Nov 8 00:25:24.085307 kernel: Initializing XFRM netlink socket Nov 8 00:25:24.142882 (udev-worker)[2395]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:25:24.155212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:25:24.165781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:24.209986 systemd-networkd[1785]: docker0: Link UP Nov 8 00:25:24.235962 dockerd[2330]: time="2025-11-08T00:25:24.235915598Z" level=info msg="Loading containers: done." Nov 8 00:25:24.263412 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck412074885-merged.mount: Deactivated successfully. 
Nov 8 00:25:24.316357 dockerd[2330]: time="2025-11-08T00:25:24.315901757Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:25:24.316357 dockerd[2330]: time="2025-11-08T00:25:24.316010205Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:25:24.316357 dockerd[2330]: time="2025-11-08T00:25:24.316130890Z" level=info msg="Daemon has completed initialization" Nov 8 00:25:24.377157 dockerd[2330]: time="2025-11-08T00:25:24.377086840Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:25:24.378114 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:25:24.461074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:24.469829 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:24.513079 kubelet[2473]: E1108 00:25:24.513026 2473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:24.517148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:24.517382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:25:26.034277 containerd[1986]: time="2025-11-08T00:25:26.034237569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:25:26.632259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507454089.mount: Deactivated successfully. 
Nov 8 00:25:28.058605 containerd[1986]: time="2025-11-08T00:25:28.058547211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:28.060030 containerd[1986]: time="2025-11-08T00:25:28.059981919Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Nov 8 00:25:28.062323 containerd[1986]: time="2025-11-08T00:25:28.060954559Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:28.063964 containerd[1986]: time="2025-11-08T00:25:28.063926694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:28.065261 containerd[1986]: time="2025-11-08T00:25:28.065223796Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.030942904s"
Nov 8 00:25:28.065421 containerd[1986]: time="2025-11-08T00:25:28.065399637Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 8 00:25:28.066184 containerd[1986]: time="2025-11-08T00:25:28.066021440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 8 00:25:30.185754 containerd[1986]: time="2025-11-08T00:25:30.185689270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:30.189072 containerd[1986]: time="2025-11-08T00:25:30.188776478Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Nov 8 00:25:30.192709 containerd[1986]: time="2025-11-08T00:25:30.192633307Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:30.200845 containerd[1986]: time="2025-11-08T00:25:30.200585716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:30.202686 containerd[1986]: time="2025-11-08T00:25:30.202018171Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.135686674s"
Nov 8 00:25:30.202686 containerd[1986]: time="2025-11-08T00:25:30.202053926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 8 00:25:30.205204 containerd[1986]: time="2025-11-08T00:25:30.205167089Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 8 00:25:31.545961 containerd[1986]: time="2025-11-08T00:25:31.545900520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:31.547608 containerd[1986]: time="2025-11-08T00:25:31.547374107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Nov 8 00:25:31.549531 containerd[1986]: time="2025-11-08T00:25:31.549482142Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:31.553797 containerd[1986]: time="2025-11-08T00:25:31.553725490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:31.555021 containerd[1986]: time="2025-11-08T00:25:31.554825681Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.349467897s"
Nov 8 00:25:31.555021 containerd[1986]: time="2025-11-08T00:25:31.554872002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 8 00:25:31.556027 containerd[1986]: time="2025-11-08T00:25:31.555540773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 8 00:25:32.715012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180690855.mount: Deactivated successfully.
Nov 8 00:25:33.213166 containerd[1986]: time="2025-11-08T00:25:33.213101675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:33.216654 containerd[1986]: time="2025-11-08T00:25:33.216564220Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Nov 8 00:25:33.220548 containerd[1986]: time="2025-11-08T00:25:33.220479308Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:33.225143 containerd[1986]: time="2025-11-08T00:25:33.225080069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:33.225860 containerd[1986]: time="2025-11-08T00:25:33.225723921Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.670149543s"
Nov 8 00:25:33.225860 containerd[1986]: time="2025-11-08T00:25:33.225761061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 8 00:25:33.226613 containerd[1986]: time="2025-11-08T00:25:33.226463526Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 8 00:25:33.702103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159126319.mount: Deactivated successfully.
Nov 8 00:25:34.620593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 8 00:25:34.625601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:25:34.886778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:25:34.899753 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:25:34.981136 kubelet[2618]: E1108 00:25:34.980375 2618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:25:34.983967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:25:34.984163 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:25:35.406298 containerd[1986]: time="2025-11-08T00:25:35.405244191Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Nov 8 00:25:35.406298 containerd[1986]: time="2025-11-08T00:25:35.406219851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:35.408932 containerd[1986]: time="2025-11-08T00:25:35.408893262Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:35.410355 containerd[1986]: time="2025-11-08T00:25:35.410309356Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.183796848s"
Nov 8 00:25:35.410461 containerd[1986]: time="2025-11-08T00:25:35.410357522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 8 00:25:35.410894 containerd[1986]: time="2025-11-08T00:25:35.410847182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 8 00:25:35.411782 containerd[1986]: time="2025-11-08T00:25:35.411748087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:35.838804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701992806.mount: Deactivated successfully.
Nov 8 00:25:35.843594 containerd[1986]: time="2025-11-08T00:25:35.843542266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:35.844686 containerd[1986]: time="2025-11-08T00:25:35.844504748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Nov 8 00:25:35.846313 containerd[1986]: time="2025-11-08T00:25:35.845565669Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:35.847963 containerd[1986]: time="2025-11-08T00:25:35.847918413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:35.848682 containerd[1986]: time="2025-11-08T00:25:35.848528111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 437.647226ms"
Nov 8 00:25:35.848682 containerd[1986]: time="2025-11-08T00:25:35.848561731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 8 00:25:35.849122 containerd[1986]: time="2025-11-08T00:25:35.849093816Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 8 00:25:39.005413 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 8 00:25:40.429633 containerd[1986]: time="2025-11-08T00:25:40.429555043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:40.440397 containerd[1986]: time="2025-11-08T00:25:40.440279816Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Nov 8 00:25:40.442928 containerd[1986]: time="2025-11-08T00:25:40.442820648Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:40.447380 containerd[1986]: time="2025-11-08T00:25:40.447267943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:40.448806 containerd[1986]: time="2025-11-08T00:25:40.448622756Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.599493125s"
Nov 8 00:25:40.448806 containerd[1986]: time="2025-11-08T00:25:40.448674263Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 8 00:25:44.095873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:25:44.109926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:25:44.146532 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-9.scope)...
Nov 8 00:25:44.146551 systemd[1]: Reloading...
Nov 8 00:25:44.284323 zram_generator::config[2743]: No configuration found.
Nov 8 00:25:44.424233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:25:44.518211 systemd[1]: Reloading finished in 371 ms.
Nov 8 00:25:44.572022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 8 00:25:44.572133 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 8 00:25:44.572393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:25:44.578670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:25:44.770274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:25:44.776540 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:25:44.825828 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:25:44.825828 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:25:44.826259 kubelet[2803]: I1108 00:25:44.825912 2803 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:25:45.689307 kubelet[2803]: I1108 00:25:45.688487 2803 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 8 00:25:45.689307 kubelet[2803]: I1108 00:25:45.688521 2803 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:25:45.693763 kubelet[2803]: I1108 00:25:45.693699 2803 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 8 00:25:45.694135 kubelet[2803]: I1108 00:25:45.694111 2803 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:25:45.694548 kubelet[2803]: I1108 00:25:45.694526 2803 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 8 00:25:45.704007 kubelet[2803]: I1108 00:25:45.703964 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:25:45.712167 kubelet[2803]: E1108 00:25:45.712074 2803 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 8 00:25:45.724825 kubelet[2803]: E1108 00:25:45.724524 2803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:25:45.724825 kubelet[2803]: I1108 00:25:45.724610 2803 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:25:45.730069 kubelet[2803]: I1108 00:25:45.730006 2803 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 8 00:25:45.735849 kubelet[2803]: I1108 00:25:45.735777 2803 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:25:45.737554 kubelet[2803]: I1108 00:25:45.735836 2803 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 00:25:45.737554 kubelet[2803]: I1108 00:25:45.737537 2803 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:25:45.737554 kubelet[2803]: I1108 00:25:45.737556 2803 container_manager_linux.go:306] "Creating device plugin manager"
Nov 8 00:25:45.737794 kubelet[2803]: I1108 00:25:45.737674 2803 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 8 00:25:45.739999 kubelet[2803]: I1108 00:25:45.739960 2803 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:25:45.741993 kubelet[2803]: I1108 00:25:45.741839 2803 kubelet.go:475] "Attempting to sync node with API server"
Nov 8 00:25:45.741993 kubelet[2803]: I1108 00:25:45.741893 2803 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:25:45.742717 kubelet[2803]: E1108 00:25:45.742666 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-121&limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 8 00:25:45.743495 kubelet[2803]: I1108 00:25:45.743318 2803 kubelet.go:387] "Adding apiserver pod source"
Nov 8 00:25:45.743495 kubelet[2803]: I1108 00:25:45.743348 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:25:45.746725 kubelet[2803]: I1108 00:25:45.746262 2803 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:25:45.749061 kubelet[2803]: I1108 00:25:45.749030 2803 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 8 00:25:45.749159 kubelet[2803]: I1108 00:25:45.749078 2803 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 8 00:25:45.749159 kubelet[2803]: W1108 00:25:45.749137 2803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 00:25:45.753084 kubelet[2803]: I1108 00:25:45.753057 2803 server.go:1262] "Started kubelet"
Nov 8 00:25:45.754940 kubelet[2803]: E1108 00:25:45.753312 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 8 00:25:45.766323 kubelet[2803]: I1108 00:25:45.766254 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:25:45.771170 kubelet[2803]: I1108 00:25:45.770848 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:25:45.773568 kubelet[2803]: I1108 00:25:45.771441 2803 server.go:310] "Adding debug handlers to kubelet server"
Nov 8 00:25:45.773568 kubelet[2803]: I1108 00:25:45.772519 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:25:45.786982 kubelet[2803]: I1108 00:25:45.786867 2803 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:25:45.787345 kubelet[2803]: I1108 00:25:45.787324 2803 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 8 00:25:45.787917 kubelet[2803]: I1108 00:25:45.787881 2803 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:25:45.789736 kubelet[2803]: E1108 00:25:45.787661 2803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.121:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-121.1875e0632d5f63e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-121,UID:ip-172-31-25-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-121,},FirstTimestamp:2025-11-08 00:25:45.753027554 +0000 UTC m=+0.972696195,LastTimestamp:2025-11-08 00:25:45.753027554 +0000 UTC m=+0.972696195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-121,}"
Nov 8 00:25:45.789736 kubelet[2803]: I1108 00:25:45.775187 2803 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 8 00:25:45.789924 kubelet[2803]: I1108 00:25:45.789789 2803 reconciler.go:29] "Reconciler: start to sync state"
Nov 8 00:25:45.790035 kubelet[2803]: E1108 00:25:45.775541 2803 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-25-121\" not found"
Nov 8 00:25:45.790407 kubelet[2803]: E1108 00:25:45.790382 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 8 00:25:45.790861 kubelet[2803]: E1108 00:25:45.790821 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-121?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="200ms"
Nov 8 00:25:45.791456 kubelet[2803]: I1108 00:25:45.775204 2803 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 8 00:25:45.792045 kubelet[2803]: I1108 00:25:45.791992 2803 factory.go:223] Registration of the systemd container factory successfully
Nov 8 00:25:45.792314 kubelet[2803]: I1108 00:25:45.792218 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:25:45.797319 kubelet[2803]: I1108 00:25:45.796119 2803 factory.go:223] Registration of the containerd container factory successfully
Nov 8 00:25:45.817442 kubelet[2803]: I1108 00:25:45.817380 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:25:45.819241 kubelet[2803]: I1108 00:25:45.819039 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:25:45.819241 kubelet[2803]: I1108 00:25:45.819177 2803 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 8 00:25:45.819241 kubelet[2803]: I1108 00:25:45.819213 2803 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 8 00:25:45.819450 kubelet[2803]: E1108 00:25:45.819267 2803 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:25:45.828132 kubelet[2803]: E1108 00:25:45.828105 2803 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:25:45.831832 kubelet[2803]: E1108 00:25:45.831799 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 8 00:25:45.840919 kubelet[2803]: I1108 00:25:45.840895 2803 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:25:45.841411 kubelet[2803]: I1108 00:25:45.841391 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:25:45.841545 kubelet[2803]: I1108 00:25:45.841535 2803 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:25:45.844960 kubelet[2803]: I1108 00:25:45.844943 2803 policy_none.go:49] "None policy: Start"
Nov 8 00:25:45.845077 kubelet[2803]: I1108 00:25:45.845069 2803 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 8 00:25:45.845143 kubelet[2803]: I1108 00:25:45.845135 2803 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 8 00:25:45.846931 kubelet[2803]: I1108 00:25:45.846913 2803 policy_none.go:47] "Start"
Nov 8 00:25:45.852562 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 8 00:25:45.869035 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 8 00:25:45.872700 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 8 00:25:45.881350 kubelet[2803]: E1108 00:25:45.881320 2803 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 8 00:25:45.881993 kubelet[2803]: I1108 00:25:45.881571 2803 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:25:45.881993 kubelet[2803]: I1108 00:25:45.881587 2803 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:25:45.883871 kubelet[2803]: E1108 00:25:45.883832 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:25:45.884080 kubelet[2803]: E1108 00:25:45.884058 2803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-121\" not found"
Nov 8 00:25:45.884935 kubelet[2803]: I1108 00:25:45.884832 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:25:45.935634 systemd[1]: Created slice kubepods-burstable-pod2f7f47a3cbcc928d1d002f56b2dcacbf.slice - libcontainer container kubepods-burstable-pod2f7f47a3cbcc928d1d002f56b2dcacbf.slice.
Nov 8 00:25:45.958307 kubelet[2803]: E1108 00:25:45.958169 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:45.964141 systemd[1]: Created slice kubepods-burstable-podda04cde5f77fdaf74b3ef0668f4217a8.slice - libcontainer container kubepods-burstable-podda04cde5f77fdaf74b3ef0668f4217a8.slice.
Nov 8 00:25:45.966650 kubelet[2803]: E1108 00:25:45.966614 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:45.969780 systemd[1]: Created slice kubepods-burstable-podd3a9cb18bf44041435735bb755d8f0b1.slice - libcontainer container kubepods-burstable-podd3a9cb18bf44041435735bb755d8f0b1.slice.
Nov 8 00:25:45.971914 kubelet[2803]: E1108 00:25:45.971884 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:45.983719 kubelet[2803]: I1108 00:25:45.983682 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-121"
Nov 8 00:25:45.984024 kubelet[2803]: E1108 00:25:45.984000 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.121:6443/api/v1/nodes\": dial tcp 172.31.25.121:6443: connect: connection refused" node="ip-172-31-25-121"
Nov 8 00:25:45.991668 kubelet[2803]: E1108 00:25:45.991625 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-121?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="400ms"
Nov 8 00:25:46.092448 kubelet[2803]: I1108 00:25:46.092357 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3a9cb18bf44041435735bb755d8f0b1-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-121\" (UID: \"d3a9cb18bf44041435735bb755d8f0b1\") " pod="kube-system/kube-scheduler-ip-172-31-25-121"
Nov 8 00:25:46.092448 kubelet[2803]: I1108 00:25:46.092395 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f7f47a3cbcc928d1d002f56b2dcacbf-ca-certs\") pod \"kube-apiserver-ip-172-31-25-121\" (UID: \"2f7f47a3cbcc928d1d002f56b2dcacbf\") " pod="kube-system/kube-apiserver-ip-172-31-25-121"
Nov 8 00:25:46.092448 kubelet[2803]: I1108 00:25:46.092414 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f7f47a3cbcc928d1d002f56b2dcacbf-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-121\" (UID: \"2f7f47a3cbcc928d1d002f56b2dcacbf\") " pod="kube-system/kube-apiserver-ip-172-31-25-121"
Nov 8 00:25:46.092448 kubelet[2803]: I1108 00:25:46.092430 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:46.092448 kubelet[2803]: I1108 00:25:46.092460 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:46.092703 kubelet[2803]: I1108 00:25:46.092479 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f7f47a3cbcc928d1d002f56b2dcacbf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-121\" (UID: \"2f7f47a3cbcc928d1d002f56b2dcacbf\") " pod="kube-system/kube-apiserver-ip-172-31-25-121"
Nov 8 00:25:46.092703 kubelet[2803]: I1108 00:25:46.092493 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:46.092703 kubelet[2803]: I1108 00:25:46.092506 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:46.092703 kubelet[2803]: I1108 00:25:46.092520 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:46.186297 kubelet[2803]: I1108 00:25:46.186250 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-121"
Nov 8 00:25:46.186689 kubelet[2803]: E1108 00:25:46.186657 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.121:6443/api/v1/nodes\": dial tcp 172.31.25.121:6443: connect: connection refused" node="ip-172-31-25-121"
Nov 8 00:25:46.261778 containerd[1986]: time="2025-11-08T00:25:46.261669138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-121,Uid:2f7f47a3cbcc928d1d002f56b2dcacbf,Namespace:kube-system,Attempt:0,}"
Nov 8 00:25:46.276663 containerd[1986]: time="2025-11-08T00:25:46.276606356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-121,Uid:d3a9cb18bf44041435735bb755d8f0b1,Namespace:kube-system,Attempt:0,}"
Nov 8 00:25:46.276983 containerd[1986]: time="2025-11-08T00:25:46.276952768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-121,Uid:da04cde5f77fdaf74b3ef0668f4217a8,Namespace:kube-system,Attempt:0,}"
Nov 8 00:25:46.392164 kubelet[2803]: E1108 00:25:46.392118 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-121?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="800ms"
Nov 8 00:25:46.591413 kubelet[2803]: I1108 00:25:46.589226 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-121"
Nov 8 00:25:46.591413 kubelet[2803]: E1108 00:25:46.589557 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.121:6443/api/v1/nodes\": dial tcp 172.31.25.121:6443: connect: connection refused" node="ip-172-31-25-121"
Nov 8 00:25:46.692121 kubelet[2803]: E1108 00:25:46.692074 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 8 00:25:46.724095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464700223.mount: Deactivated successfully.
Nov 8 00:25:46.730329 containerd[1986]: time="2025-11-08T00:25:46.730247371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:25:46.732735 containerd[1986]: time="2025-11-08T00:25:46.732679920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 8 00:25:46.733904 containerd[1986]: time="2025-11-08T00:25:46.733858935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:25:46.734705 containerd[1986]: time="2025-11-08T00:25:46.734666725Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:25:46.735538 containerd[1986]: time="2025-11-08T00:25:46.735503970Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:25:46.736562 containerd[1986]: time="2025-11-08T00:25:46.736476041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:25:46.737581 containerd[1986]: time="2025-11-08T00:25:46.737339307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:25:46.740314 containerd[1986]: time="2025-11-08T00:25:46.738982965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:25:46.741028 containerd[1986]: time="2025-11-08T00:25:46.740994651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 464.281115ms"
Nov 8 00:25:46.743994 containerd[1986]: time="2025-11-08T00:25:46.743957170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.215086ms"
Nov 8 00:25:46.747002 containerd[1986]: time="2025-11-08T00:25:46.746971204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.963644ms"
Nov 8 00:25:46.879638 kubelet[2803]: E1108 00:25:46.879596 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 8 00:25:46.947627 kubelet[2803]: E1108 00:25:46.947542 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-121&limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 8 00:25:47.025386 containerd[1986]: time="2025-11-08T00:25:47.024122659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:25:47.025386 containerd[1986]: time="2025-11-08T00:25:47.024192288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:25:47.025386 containerd[1986]: time="2025-11-08T00:25:47.024227157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:47.025386 containerd[1986]: time="2025-11-08T00:25:47.025175242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:47.034684 containerd[1986]: time="2025-11-08T00:25:47.034425418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:25:47.040617 containerd[1986]: time="2025-11-08T00:25:47.040178635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:25:47.040759 containerd[1986]: time="2025-11-08T00:25:47.040668052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:47.041228 containerd[1986]: time="2025-11-08T00:25:47.041072052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:47.049313 containerd[1986]: time="2025-11-08T00:25:47.048209014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:25:47.050678 containerd[1986]: time="2025-11-08T00:25:47.050339846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:25:47.050678 containerd[1986]: time="2025-11-08T00:25:47.050385370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:47.050678 containerd[1986]: time="2025-11-08T00:25:47.050508392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:47.067524 systemd[1]: Started cri-containerd-db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8.scope - libcontainer container db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8.
Nov 8 00:25:47.083507 systemd[1]: Started cri-containerd-868519767abb05daa68c361db409ef6de55c56e1ab0563d05342aa3d8a85cf43.scope - libcontainer container 868519767abb05daa68c361db409ef6de55c56e1ab0563d05342aa3d8a85cf43.
Nov 8 00:25:47.096449 systemd[1]: Started cri-containerd-9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b.scope - libcontainer container 9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b.
Nov 8 00:25:47.193051 kubelet[2803]: E1108 00:25:47.192908 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-121?timeout=10s\": dial tcp 172.31.25.121:6443: connect: connection refused" interval="1.6s"
Nov 8 00:25:47.199454 containerd[1986]: time="2025-11-08T00:25:47.199308806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-121,Uid:d3a9cb18bf44041435735bb755d8f0b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8\""
Nov 8 00:25:47.217768 kubelet[2803]: E1108 00:25:47.217730 2803 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 8 00:25:47.220807 containerd[1986]: time="2025-11-08T00:25:47.220668673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-121,Uid:da04cde5f77fdaf74b3ef0668f4217a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b\""
Nov 8 00:25:47.223247 containerd[1986]: time="2025-11-08T00:25:47.222476972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-121,Uid:2f7f47a3cbcc928d1d002f56b2dcacbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"868519767abb05daa68c361db409ef6de55c56e1ab0563d05342aa3d8a85cf43\""
Nov 8 00:25:47.223409 containerd[1986]: time="2025-11-08T00:25:47.223257190Z" level=info msg="CreateContainer within sandbox \"db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 8 00:25:47.229970 containerd[1986]: time="2025-11-08T00:25:47.229927103Z" level=info msg="CreateContainer within sandbox \"868519767abb05daa68c361db409ef6de55c56e1ab0563d05342aa3d8a85cf43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 8 00:25:47.233094 containerd[1986]: time="2025-11-08T00:25:47.232776058Z" level=info msg="CreateContainer within sandbox \"9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 8 00:25:47.254701 containerd[1986]: time="2025-11-08T00:25:47.254510741Z" level=info msg="CreateContainer within sandbox \"db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401\""
Nov 8 00:25:47.255342 containerd[1986]: time="2025-11-08T00:25:47.255311833Z" level=info msg="StartContainer for \"ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401\""
Nov 8 00:25:47.263460 containerd[1986]: time="2025-11-08T00:25:47.263411211Z" level=info msg="CreateContainer within sandbox \"868519767abb05daa68c361db409ef6de55c56e1ab0563d05342aa3d8a85cf43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b8280f19cb98eabb25065d7f61ad59816d89f78815eaf82ca77c8cd2f18c39ea\""
Nov 8 00:25:47.264851 containerd[1986]: time="2025-11-08T00:25:47.264813232Z" level=info msg="StartContainer for \"b8280f19cb98eabb25065d7f61ad59816d89f78815eaf82ca77c8cd2f18c39ea\""
Nov 8 00:25:47.275544 containerd[1986]: time="2025-11-08T00:25:47.275494287Z" level=info msg="CreateContainer within sandbox \"9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df\""
Nov 8 00:25:47.276660 containerd[1986]: time="2025-11-08T00:25:47.276627425Z" level=info msg="StartContainer for \"1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df\""
Nov 8 00:25:47.309236 systemd[1]: Started cri-containerd-ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401.scope - libcontainer container ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401.
Nov 8 00:25:47.317592 systemd[1]: Started cri-containerd-b8280f19cb98eabb25065d7f61ad59816d89f78815eaf82ca77c8cd2f18c39ea.scope - libcontainer container b8280f19cb98eabb25065d7f61ad59816d89f78815eaf82ca77c8cd2f18c39ea.
Nov 8 00:25:47.358502 systemd[1]: Started cri-containerd-1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df.scope - libcontainer container 1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df.
Nov 8 00:25:47.403650 kubelet[2803]: I1108 00:25:47.403618 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-121"
Nov 8 00:25:47.404314 kubelet[2803]: E1108 00:25:47.404005 2803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.121:6443/api/v1/nodes\": dial tcp 172.31.25.121:6443: connect: connection refused" node="ip-172-31-25-121"
Nov 8 00:25:47.424127 containerd[1986]: time="2025-11-08T00:25:47.424074742Z" level=info msg="StartContainer for \"ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401\" returns successfully"
Nov 8 00:25:47.436855 containerd[1986]: time="2025-11-08T00:25:47.436807624Z" level=info msg="StartContainer for \"b8280f19cb98eabb25065d7f61ad59816d89f78815eaf82ca77c8cd2f18c39ea\" returns successfully"
Nov 8 00:25:47.463912 containerd[1986]: time="2025-11-08T00:25:47.463503787Z" level=info msg="StartContainer for \"1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df\" returns successfully"
Nov 8 00:25:47.731239 kubelet[2803]: E1108 00:25:47.731120 2803 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.121:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 8 00:25:47.845414 kubelet[2803]: E1108 00:25:47.845382 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:47.852564 kubelet[2803]: E1108 00:25:47.852532 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:47.852949 kubelet[2803]: E1108 00:25:47.852926 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:48.856862 kubelet[2803]: E1108 00:25:48.856683 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:48.859572 kubelet[2803]: E1108 00:25:48.859320 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:49.007786 kubelet[2803]: I1108 00:25:49.006915 2803 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-121"
Nov 8 00:25:49.859754 kubelet[2803]: E1108 00:25:49.858586 2803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:50.651667 kubelet[2803]: E1108 00:25:50.651626 2803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-121\" not found" node="ip-172-31-25-121"
Nov 8 00:25:50.686425 kubelet[2803]: I1108 00:25:50.686362 2803 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-121"
Nov 8 00:25:50.748121 kubelet[2803]: I1108 00:25:50.748075 2803 apiserver.go:52] "Watching apiserver"
Nov 8 00:25:50.782363 kubelet[2803]: I1108 00:25:50.782326 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-121"
Nov 8 00:25:50.791728 kubelet[2803]: I1108 00:25:50.791674 2803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 8 00:25:50.792435 kubelet[2803]: E1108 00:25:50.792411 2803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-121"
Nov 8 00:25:50.792435 kubelet[2803]: I1108 00:25:50.792433 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:50.794193 kubelet[2803]: E1108 00:25:50.794148 2803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-121"
Nov 8 00:25:50.794193 kubelet[2803]: I1108 00:25:50.794183 2803 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-121"
Nov 8 00:25:50.796230 kubelet[2803]: E1108 00:25:50.796190 2803 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-121\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-121"
Nov 8 00:25:52.770698 systemd[1]: Reloading requested from client PID 3090 ('systemctl') (unit session-9.scope)...
Nov 8 00:25:52.770716 systemd[1]: Reloading...
Nov 8 00:25:52.882316 zram_generator::config[3130]: No configuration found.
Nov 8 00:25:53.026522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:25:53.127973 systemd[1]: Reloading finished in 356 ms.
Nov 8 00:25:53.172513 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:25:53.191899 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:25:53.192214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:25:53.192322 systemd[1]: kubelet.service: Consumed 1.446s CPU time, 122.6M memory peak, 0B memory swap peak.
Nov 8 00:25:53.200642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:25:53.500080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:25:53.510736 (kubelet)[3190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:25:53.574359 kubelet[3190]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:25:53.574359 kubelet[3190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:25:53.576370 kubelet[3190]: I1108 00:25:53.576310 3190 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:25:53.583647 kubelet[3190]: I1108 00:25:53.583610 3190 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 8 00:25:53.583647 kubelet[3190]: I1108 00:25:53.583638 3190 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:25:53.587513 kubelet[3190]: I1108 00:25:53.587461 3190 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 8 00:25:53.587513 kubelet[3190]: I1108 00:25:53.587503 3190 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:25:53.587829 kubelet[3190]: I1108 00:25:53.587807 3190 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 8 00:25:53.589049 kubelet[3190]: I1108 00:25:53.589021 3190 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 8 00:25:53.595487 kubelet[3190]: I1108 00:25:53.595458 3190 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:25:53.604332 kubelet[3190]: E1108 00:25:53.603768 3190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:25:53.604332 kubelet[3190]: I1108 00:25:53.603864 3190 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:25:53.606529 kubelet[3190]: I1108 00:25:53.606503 3190 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 8 00:25:53.613913 kubelet[3190]: I1108 00:25:53.613824 3190 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:25:53.614179 kubelet[3190]: I1108 00:25:53.613904 3190 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 00:25:53.614179 kubelet[3190]: I1108 00:25:53.614150 3190 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:25:53.614179 kubelet[3190]: I1108 00:25:53.614168 3190 container_manager_linux.go:306] "Creating device plugin manager"
Nov 8 00:25:53.614523 kubelet[3190]: I1108 00:25:53.614212 3190 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 8 00:25:53.617629 kubelet[3190]: I1108 00:25:53.617598 3190 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:25:53.620656 kubelet[3190]: I1108 00:25:53.620619 3190 kubelet.go:475] "Attempting to sync node with API server"
Nov 8 00:25:53.620656 kubelet[3190]: I1108 00:25:53.620655 3190 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:25:53.622116 kubelet[3190]: I1108 00:25:53.622089 3190 kubelet.go:387] "Adding apiserver pod source"
Nov 8 00:25:53.622116 kubelet[3190]: I1108 00:25:53.622117 3190 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:25:53.624862 kubelet[3190]: I1108 00:25:53.624434 3190 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:25:53.625777 kubelet[3190]: I1108 00:25:53.625732 3190 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 8 00:25:53.625777 kubelet[3190]: I1108 00:25:53.625767 3190 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 8 00:25:53.630381 kubelet[3190]: I1108 00:25:53.630362 3190 server.go:1262] "Started kubelet"
Nov 8 00:25:53.632142 kubelet[3190]: I1108 00:25:53.632121 3190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:25:53.646231 kubelet[3190]: I1108 00:25:53.646195 3190 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:25:53.653151 kubelet[3190]: I1108 00:25:53.653117 3190 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:25:53.653406 kubelet[3190]: I1108 00:25:53.653367 3190 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 8 00:25:53.653828 kubelet[3190]: I1108 00:25:53.653814 3190 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:25:53.659949 kubelet[3190]: I1108 00:25:53.655691 3190 server.go:310] "Adding debug handlers to kubelet server"
Nov 8 00:25:53.661804 kubelet[3190]: I1108 00:25:53.658962 3190 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 8 00:25:53.662007 kubelet[3190]: E1108 00:25:53.661984 3190 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:25:53.662007 kubelet[3190]: I1108 00:25:53.659548 3190 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 8 00:25:53.662078 kubelet[3190]: E1108 00:25:53.659714 3190 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-25-121\" not found"
Nov 8 00:25:53.662078 kubelet[3190]: I1108 00:25:53.659163 3190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:25:53.662262 kubelet[3190]: I1108 00:25:53.662229 3190 reconciler.go:29] "Reconciler: start to sync state"
Nov 8 00:25:53.666824 kubelet[3190]: I1108 00:25:53.664604 3190 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:25:53.668494 kubelet[3190]: I1108 00:25:53.668466 3190 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:25:53.673547 kubelet[3190]: I1108 00:25:53.673512 3190 factory.go:223] Registration of the containerd container factory successfully
Nov 8 00:25:53.673547 kubelet[3190]: I1108 00:25:53.673532 3190 factory.go:223] Registration of the systemd container factory successfully
Nov 8 00:25:53.683590 kubelet[3190]: I1108 00:25:53.683279 3190 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:25:53.683590 kubelet[3190]: I1108 00:25:53.683320 3190 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 8 00:25:53.683590 kubelet[3190]: I1108 00:25:53.683341 3190 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 8 00:25:53.683590 kubelet[3190]: E1108 00:25:53.683381 3190 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:25:53.733430 kubelet[3190]: I1108 00:25:53.733407 3190 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:25:53.733703 kubelet[3190]: I1108 00:25:53.733613 3190 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:25:53.733703 kubelet[3190]: I1108 00:25:53.733640 3190 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.733918 3190 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.733933 3190 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.733955 3190 policy_none.go:49] "None policy: Start"
Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.733968 3190 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.733980 3190 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.734097 3190 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:25:53.734942 kubelet[3190]: I1108 00:25:53.734108 3190 policy_none.go:47] "Start" Nov 8 00:25:53.740032 kubelet[3190]: E1108 00:25:53.739997 3190 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:25:53.740243 kubelet[3190]: I1108 00:25:53.740201 3190 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:25:53.740243 kubelet[3190]: I1108 00:25:53.740223 3190 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:25:53.742061 kubelet[3190]: I1108 00:25:53.740862 3190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:25:53.747331 kubelet[3190]: E1108 00:25:53.746764 3190 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:25:53.784540 kubelet[3190]: I1108 00:25:53.784423 3190 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-121" Nov 8 00:25:53.786531 kubelet[3190]: I1108 00:25:53.786502 3190 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-121" Nov 8 00:25:53.786927 kubelet[3190]: I1108 00:25:53.786899 3190 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-121" Nov 8 00:25:53.842568 kubelet[3190]: I1108 00:25:53.842533 3190 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-121" Nov 8 00:25:53.851656 kubelet[3190]: I1108 00:25:53.851620 3190 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-25-121" Nov 8 00:25:53.851794 kubelet[3190]: I1108 00:25:53.851721 3190 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-121" Nov 8 00:25:53.901624 update_engine[1968]: I20251108 00:25:53.901551 1968 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:25:53.965528 kubelet[3190]: I1108 00:25:53.965494 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f7f47a3cbcc928d1d002f56b2dcacbf-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-121\" (UID: \"2f7f47a3cbcc928d1d002f56b2dcacbf\") " pod="kube-system/kube-apiserver-ip-172-31-25-121" Nov 8 00:25:53.965658 kubelet[3190]: I1108 00:25:53.965536 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121" Nov 8 00:25:53.965658 kubelet[3190]: I1108 00:25:53.965560 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121" Nov 8 00:25:53.965658 kubelet[3190]: I1108 00:25:53.965581 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121" Nov 8 00:25:53.965658 kubelet[3190]: I1108 00:25:53.965606 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " 
pod="kube-system/kube-controller-manager-ip-172-31-25-121" Nov 8 00:25:53.965658 kubelet[3190]: I1108 00:25:53.965629 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da04cde5f77fdaf74b3ef0668f4217a8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-121\" (UID: \"da04cde5f77fdaf74b3ef0668f4217a8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-121" Nov 8 00:25:53.966476 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3241) Nov 8 00:25:53.966928 kubelet[3190]: I1108 00:25:53.965660 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f7f47a3cbcc928d1d002f56b2dcacbf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-121\" (UID: \"2f7f47a3cbcc928d1d002f56b2dcacbf\") " pod="kube-system/kube-apiserver-ip-172-31-25-121" Nov 8 00:25:53.966928 kubelet[3190]: I1108 00:25:53.965691 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3a9cb18bf44041435735bb755d8f0b1-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-121\" (UID: \"d3a9cb18bf44041435735bb755d8f0b1\") " pod="kube-system/kube-scheduler-ip-172-31-25-121" Nov 8 00:25:53.966928 kubelet[3190]: I1108 00:25:53.965720 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f7f47a3cbcc928d1d002f56b2dcacbf-ca-certs\") pod \"kube-apiserver-ip-172-31-25-121\" (UID: \"2f7f47a3cbcc928d1d002f56b2dcacbf\") " pod="kube-system/kube-apiserver-ip-172-31-25-121" Nov 8 00:25:54.237417 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3256) Nov 8 
00:25:54.633799 kubelet[3190]: I1108 00:25:54.633745 3190 apiserver.go:52] "Watching apiserver" Nov 8 00:25:54.663643 kubelet[3190]: I1108 00:25:54.662645 3190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:25:54.717099 kubelet[3190]: I1108 00:25:54.717061 3190 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-121" Nov 8 00:25:54.725344 kubelet[3190]: E1108 00:25:54.725308 3190 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-121\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-121" Nov 8 00:25:54.755496 kubelet[3190]: I1108 00:25:54.755421 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-121" podStartSLOduration=1.755406551 podStartE2EDuration="1.755406551s" podCreationTimestamp="2025-11-08 00:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:54.745846006 +0000 UTC m=+1.228740398" watchObservedRunningTime="2025-11-08 00:25:54.755406551 +0000 UTC m=+1.238300939" Nov 8 00:25:54.767938 kubelet[3190]: I1108 00:25:54.767863 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-121" podStartSLOduration=1.767841755 podStartE2EDuration="1.767841755s" podCreationTimestamp="2025-11-08 00:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:54.756103077 +0000 UTC m=+1.238997468" watchObservedRunningTime="2025-11-08 00:25:54.767841755 +0000 UTC m=+1.250736146" Nov 8 00:25:59.240361 kubelet[3190]: I1108 00:25:59.240322 3190 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:25:59.241173 containerd[1986]: 
time="2025-11-08T00:25:59.240667489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:25:59.242924 kubelet[3190]: I1108 00:25:59.241893 3190 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:25:59.322760 kubelet[3190]: I1108 00:25:59.322224 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-121" podStartSLOduration=6.322208518 podStartE2EDuration="6.322208518s" podCreationTimestamp="2025-11-08 00:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:54.768679868 +0000 UTC m=+1.251574259" watchObservedRunningTime="2025-11-08 00:25:59.322208518 +0000 UTC m=+5.805102907" Nov 8 00:26:00.172863 systemd[1]: Created slice kubepods-besteffort-poddc34f406_cd94_4085_9b2d_b4d792566730.slice - libcontainer container kubepods-besteffort-poddc34f406_cd94_4085_9b2d_b4d792566730.slice. 
Nov 8 00:26:00.226891 kubelet[3190]: I1108 00:26:00.226845 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc34f406-cd94-4085-9b2d-b4d792566730-kube-proxy\") pod \"kube-proxy-jj9t9\" (UID: \"dc34f406-cd94-4085-9b2d-b4d792566730\") " pod="kube-system/kube-proxy-jj9t9" Nov 8 00:26:00.227070 kubelet[3190]: I1108 00:26:00.226901 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc34f406-cd94-4085-9b2d-b4d792566730-xtables-lock\") pod \"kube-proxy-jj9t9\" (UID: \"dc34f406-cd94-4085-9b2d-b4d792566730\") " pod="kube-system/kube-proxy-jj9t9" Nov 8 00:26:00.227070 kubelet[3190]: I1108 00:26:00.226922 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc34f406-cd94-4085-9b2d-b4d792566730-lib-modules\") pod \"kube-proxy-jj9t9\" (UID: \"dc34f406-cd94-4085-9b2d-b4d792566730\") " pod="kube-system/kube-proxy-jj9t9" Nov 8 00:26:00.227070 kubelet[3190]: I1108 00:26:00.226944 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnqh\" (UniqueName: \"kubernetes.io/projected/dc34f406-cd94-4085-9b2d-b4d792566730-kube-api-access-sgnqh\") pod \"kube-proxy-jj9t9\" (UID: \"dc34f406-cd94-4085-9b2d-b4d792566730\") " pod="kube-system/kube-proxy-jj9t9" Nov 8 00:26:00.347630 systemd[1]: Created slice kubepods-besteffort-podce15cdf6_d79a_45c3_b348_04df18c498e8.slice - libcontainer container kubepods-besteffort-podce15cdf6_d79a_45c3_b348_04df18c498e8.slice. 
Nov 8 00:26:00.428858 kubelet[3190]: I1108 00:26:00.428718 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dh9\" (UniqueName: \"kubernetes.io/projected/ce15cdf6-d79a-45c3-b348-04df18c498e8-kube-api-access-z8dh9\") pod \"tigera-operator-65cdcdfd6d-v7m75\" (UID: \"ce15cdf6-d79a-45c3-b348-04df18c498e8\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v7m75" Nov 8 00:26:00.428858 kubelet[3190]: I1108 00:26:00.428770 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce15cdf6-d79a-45c3-b348-04df18c498e8-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-v7m75\" (UID: \"ce15cdf6-d79a-45c3-b348-04df18c498e8\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v7m75" Nov 8 00:26:00.494171 containerd[1986]: time="2025-11-08T00:26:00.494129686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj9t9,Uid:dc34f406-cd94-4085-9b2d-b4d792566730,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:00.525484 containerd[1986]: time="2025-11-08T00:26:00.525126830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:00.526058 containerd[1986]: time="2025-11-08T00:26:00.525960673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:00.526395 containerd[1986]: time="2025-11-08T00:26:00.526162883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:00.526395 containerd[1986]: time="2025-11-08T00:26:00.526277480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:00.564501 systemd[1]: Started cri-containerd-20189302a9daac37ff2b4c6eff7631db893369f5a620c79bf11c815f412081e5.scope - libcontainer container 20189302a9daac37ff2b4c6eff7631db893369f5a620c79bf11c815f412081e5. Nov 8 00:26:00.592508 containerd[1986]: time="2025-11-08T00:26:00.592475784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jj9t9,Uid:dc34f406-cd94-4085-9b2d-b4d792566730,Namespace:kube-system,Attempt:0,} returns sandbox id \"20189302a9daac37ff2b4c6eff7631db893369f5a620c79bf11c815f412081e5\"" Nov 8 00:26:00.602338 containerd[1986]: time="2025-11-08T00:26:00.602299500Z" level=info msg="CreateContainer within sandbox \"20189302a9daac37ff2b4c6eff7631db893369f5a620c79bf11c815f412081e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:26:00.629516 containerd[1986]: time="2025-11-08T00:26:00.629470801Z" level=info msg="CreateContainer within sandbox \"20189302a9daac37ff2b4c6eff7631db893369f5a620c79bf11c815f412081e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f851f9527020e2c38f696f1f2eccced9209b93eafa7050235ef23999a056efad\"" Nov 8 00:26:00.630987 containerd[1986]: time="2025-11-08T00:26:00.630480311Z" level=info msg="StartContainer for \"f851f9527020e2c38f696f1f2eccced9209b93eafa7050235ef23999a056efad\"" Nov 8 00:26:00.657131 containerd[1986]: time="2025-11-08T00:26:00.656826026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v7m75,Uid:ce15cdf6-d79a-45c3-b348-04df18c498e8,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:26:00.664142 systemd[1]: Started cri-containerd-f851f9527020e2c38f696f1f2eccced9209b93eafa7050235ef23999a056efad.scope - libcontainer container f851f9527020e2c38f696f1f2eccced9209b93eafa7050235ef23999a056efad. Nov 8 00:26:00.707937 containerd[1986]: time="2025-11-08T00:26:00.707692316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:00.707937 containerd[1986]: time="2025-11-08T00:26:00.707762860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:00.708199 containerd[1986]: time="2025-11-08T00:26:00.707805880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:00.708199 containerd[1986]: time="2025-11-08T00:26:00.707930363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:00.723437 containerd[1986]: time="2025-11-08T00:26:00.723393016Z" level=info msg="StartContainer for \"f851f9527020e2c38f696f1f2eccced9209b93eafa7050235ef23999a056efad\" returns successfully" Nov 8 00:26:00.748730 systemd[1]: Started cri-containerd-629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6.scope - libcontainer container 629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6. 
Nov 8 00:26:00.807467 containerd[1986]: time="2025-11-08T00:26:00.807426224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v7m75,Uid:ce15cdf6-d79a-45c3-b348-04df18c498e8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6\"" Nov 8 00:26:00.809499 containerd[1986]: time="2025-11-08T00:26:00.809458772Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:26:01.438250 kubelet[3190]: I1108 00:26:01.435796 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jj9t9" podStartSLOduration=1.4347716529999999 podStartE2EDuration="1.434771653s" podCreationTimestamp="2025-11-08 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:00.758795641 +0000 UTC m=+7.241690032" watchObservedRunningTime="2025-11-08 00:26:01.434771653 +0000 UTC m=+7.917666046" Nov 8 00:26:02.500683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560571208.mount: Deactivated successfully. 
Nov 8 00:26:03.720251 containerd[1986]: time="2025-11-08T00:26:03.719507056Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:03.720251 containerd[1986]: time="2025-11-08T00:26:03.720192781Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:26:03.721388 containerd[1986]: time="2025-11-08T00:26:03.721355583Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:03.724345 containerd[1986]: time="2025-11-08T00:26:03.724235098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:03.725391 containerd[1986]: time="2025-11-08T00:26:03.724836251Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.915286892s" Nov 8 00:26:03.725391 containerd[1986]: time="2025-11-08T00:26:03.724875446Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:26:03.730436 containerd[1986]: time="2025-11-08T00:26:03.730406047Z" level=info msg="CreateContainer within sandbox \"629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:26:03.748887 containerd[1986]: time="2025-11-08T00:26:03.748846046Z" level=info msg="CreateContainer within sandbox 
\"629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac\"" Nov 8 00:26:03.751252 containerd[1986]: time="2025-11-08T00:26:03.749773345Z" level=info msg="StartContainer for \"6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac\"" Nov 8 00:26:03.801522 systemd[1]: Started cri-containerd-6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac.scope - libcontainer container 6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac. Nov 8 00:26:03.860908 containerd[1986]: time="2025-11-08T00:26:03.860568227Z" level=info msg="StartContainer for \"6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac\" returns successfully" Nov 8 00:26:03.906219 kubelet[3190]: I1108 00:26:03.906160 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-v7m75" podStartSLOduration=0.989184022 podStartE2EDuration="3.906145711s" podCreationTimestamp="2025-11-08 00:26:00 +0000 UTC" firstStartedPulling="2025-11-08 00:26:00.808959687 +0000 UTC m=+7.291854058" lastFinishedPulling="2025-11-08 00:26:03.725921379 +0000 UTC m=+10.208815747" observedRunningTime="2025-11-08 00:26:03.90534769 +0000 UTC m=+10.388242082" watchObservedRunningTime="2025-11-08 00:26:03.906145711 +0000 UTC m=+10.389040104" Nov 8 00:26:10.930874 sudo[2315]: pam_unix(sudo:session): session closed for user root Nov 8 00:26:10.954990 sshd[2312]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:10.961690 systemd[1]: sshd@8-172.31.25.121:22-139.178.89.65:37298.service: Deactivated successfully. Nov 8 00:26:10.968102 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:26:10.969820 systemd[1]: session-9.scope: Consumed 6.072s CPU time, 144.9M memory peak, 0B memory swap peak. Nov 8 00:26:10.971580 systemd-logind[1962]: Session 9 logged out. 
Waiting for processes to exit. Nov 8 00:26:10.976250 systemd-logind[1962]: Removed session 9. Nov 8 00:26:17.133572 systemd[1]: Created slice kubepods-besteffort-podbf5acb40_54cf_41b3_8ff0_4acda311e3cd.slice - libcontainer container kubepods-besteffort-podbf5acb40_54cf_41b3_8ff0_4acda311e3cd.slice. Nov 8 00:26:17.189457 kubelet[3190]: I1108 00:26:17.189211 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m278l\" (UniqueName: \"kubernetes.io/projected/bf5acb40-54cf-41b3-8ff0-4acda311e3cd-kube-api-access-m278l\") pod \"calico-typha-cb5897868-z6kds\" (UID: \"bf5acb40-54cf-41b3-8ff0-4acda311e3cd\") " pod="calico-system/calico-typha-cb5897868-z6kds" Nov 8 00:26:17.189457 kubelet[3190]: I1108 00:26:17.189345 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf5acb40-54cf-41b3-8ff0-4acda311e3cd-typha-certs\") pod \"calico-typha-cb5897868-z6kds\" (UID: \"bf5acb40-54cf-41b3-8ff0-4acda311e3cd\") " pod="calico-system/calico-typha-cb5897868-z6kds" Nov 8 00:26:17.189457 kubelet[3190]: I1108 00:26:17.189379 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf5acb40-54cf-41b3-8ff0-4acda311e3cd-tigera-ca-bundle\") pod \"calico-typha-cb5897868-z6kds\" (UID: \"bf5acb40-54cf-41b3-8ff0-4acda311e3cd\") " pod="calico-system/calico-typha-cb5897868-z6kds" Nov 8 00:26:17.368112 systemd[1]: Created slice kubepods-besteffort-podf9e26acf_a45a_45a5_bff5_6dd223b8495f.slice - libcontainer container kubepods-besteffort-podf9e26acf_a45a_45a5_bff5_6dd223b8495f.slice. 
Nov 8 00:26:17.392073 kubelet[3190]: I1108 00:26:17.391653 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-cni-net-dir\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392073 kubelet[3190]: I1108 00:26:17.391692 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cksvk\" (UniqueName: \"kubernetes.io/projected/f9e26acf-a45a-45a5-bff5-6dd223b8495f-kube-api-access-cksvk\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392073 kubelet[3190]: I1108 00:26:17.391717 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-var-lib-calico\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392073 kubelet[3190]: I1108 00:26:17.391742 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-cni-bin-dir\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392073 kubelet[3190]: I1108 00:26:17.391762 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-flexvol-driver-host\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392477 kubelet[3190]: I1108 
00:26:17.391776 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-lib-modules\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392477 kubelet[3190]: I1108 00:26:17.391789 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-var-run-calico\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392477 kubelet[3190]: I1108 00:26:17.391803 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-xtables-lock\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392477 kubelet[3190]: I1108 00:26:17.391830 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9e26acf-a45a-45a5-bff5-6dd223b8495f-tigera-ca-bundle\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392477 kubelet[3190]: I1108 00:26:17.391847 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-cni-log-dir\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392767 kubelet[3190]: I1108 00:26:17.391864 3190 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f9e26acf-a45a-45a5-bff5-6dd223b8495f-node-certs\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.392767 kubelet[3190]: I1108 00:26:17.391884 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f9e26acf-a45a-45a5-bff5-6dd223b8495f-policysync\") pod \"calico-node-x5qf9\" (UID: \"f9e26acf-a45a-45a5-bff5-6dd223b8495f\") " pod="calico-system/calico-node-x5qf9" Nov 8 00:26:17.472642 containerd[1986]: time="2025-11-08T00:26:17.472596112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb5897868-z6kds,Uid:bf5acb40-54cf-41b3-8ff0-4acda311e3cd,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:17.500124 kubelet[3190]: E1108 00:26:17.499820 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.500124 kubelet[3190]: W1108 00:26:17.499860 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.500124 kubelet[3190]: E1108 00:26:17.499906 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.501330 kubelet[3190]: E1108 00:26:17.501092 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.503223 kubelet[3190]: W1108 00:26:17.503012 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.503223 kubelet[3190]: E1108 00:26:17.503054 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.505403 kubelet[3190]: E1108 00:26:17.504909 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.505403 kubelet[3190]: W1108 00:26:17.504930 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.505403 kubelet[3190]: E1108 00:26:17.504960 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.507377 kubelet[3190]: E1108 00:26:17.507025 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.507377 kubelet[3190]: W1108 00:26:17.507044 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.507377 kubelet[3190]: E1108 00:26:17.507077 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.507949 kubelet[3190]: E1108 00:26:17.507737 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.507949 kubelet[3190]: W1108 00:26:17.507757 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.507949 kubelet[3190]: E1108 00:26:17.507784 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.509746 kubelet[3190]: E1108 00:26:17.508513 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.509746 kubelet[3190]: W1108 00:26:17.508533 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.510089 kubelet[3190]: E1108 00:26:17.509922 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.510740 kubelet[3190]: E1108 00:26:17.510504 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.510740 kubelet[3190]: W1108 00:26:17.510520 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.510740 kubelet[3190]: E1108 00:26:17.510556 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.513299 kubelet[3190]: E1108 00:26:17.511890 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.513299 kubelet[3190]: W1108 00:26:17.511915 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.513299 kubelet[3190]: E1108 00:26:17.511931 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.517018 containerd[1986]: time="2025-11-08T00:26:17.515507897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:17.517018 containerd[1986]: time="2025-11-08T00:26:17.515824925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:17.517018 containerd[1986]: time="2025-11-08T00:26:17.515918240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:17.517018 containerd[1986]: time="2025-11-08T00:26:17.516313629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:17.528417 kubelet[3190]: E1108 00:26:17.528374 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.528607 kubelet[3190]: W1108 00:26:17.528590 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.528697 kubelet[3190]: E1108 00:26:17.528684 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.529482 kubelet[3190]: E1108 00:26:17.529411 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.529482 kubelet[3190]: W1108 00:26:17.529431 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.529482 kubelet[3190]: E1108 00:26:17.529449 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.601633 kubelet[3190]: E1108 00:26:17.601581 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:17.608542 systemd[1]: Started cri-containerd-1f01cd440c7e36ccfbdbbfd2b87074c380c820e60517bbc90afa183c2780216c.scope - libcontainer container 1f01cd440c7e36ccfbdbbfd2b87074c380c820e60517bbc90afa183c2780216c. Nov 8 00:26:17.671524 kubelet[3190]: E1108 00:26:17.671135 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.671524 kubelet[3190]: W1108 00:26:17.671162 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.671524 kubelet[3190]: E1108 00:26:17.671205 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.671772 kubelet[3190]: E1108 00:26:17.671587 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.671772 kubelet[3190]: W1108 00:26:17.671600 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.671772 kubelet[3190]: E1108 00:26:17.671631 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.671906 kubelet[3190]: E1108 00:26:17.671889 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.671906 kubelet[3190]: W1108 00:26:17.671899 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.671999 kubelet[3190]: E1108 00:26:17.671912 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.674092 kubelet[3190]: E1108 00:26:17.672264 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.674092 kubelet[3190]: W1108 00:26:17.673355 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.674092 kubelet[3190]: E1108 00:26:17.673379 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.674092 kubelet[3190]: E1108 00:26:17.673709 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.674092 kubelet[3190]: W1108 00:26:17.673737 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.674092 kubelet[3190]: E1108 00:26:17.673750 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.674092 kubelet[3190]: E1108 00:26:17.673992 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.674092 kubelet[3190]: W1108 00:26:17.674001 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.674092 kubelet[3190]: E1108 00:26:17.674012 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.674678 kubelet[3190]: E1108 00:26:17.674254 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.674678 kubelet[3190]: W1108 00:26:17.674264 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.674678 kubelet[3190]: E1108 00:26:17.674279 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.674678 kubelet[3190]: E1108 00:26:17.674537 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.674678 kubelet[3190]: W1108 00:26:17.674547 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.674678 kubelet[3190]: E1108 00:26:17.674559 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.676426 kubelet[3190]: E1108 00:26:17.675890 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.676426 kubelet[3190]: W1108 00:26:17.675906 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.676426 kubelet[3190]: E1108 00:26:17.675924 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.676426 kubelet[3190]: E1108 00:26:17.676141 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.676426 kubelet[3190]: W1108 00:26:17.676150 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.676426 kubelet[3190]: E1108 00:26:17.676162 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.677967 containerd[1986]: time="2025-11-08T00:26:17.677925621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5qf9,Uid:f9e26acf-a45a-45a5-bff5-6dd223b8495f,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:17.679062 kubelet[3190]: E1108 00:26:17.679041 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.679062 kubelet[3190]: W1108 00:26:17.679060 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.679237 kubelet[3190]: E1108 00:26:17.679079 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.681427 kubelet[3190]: E1108 00:26:17.681406 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.681560 kubelet[3190]: W1108 00:26:17.681428 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.681560 kubelet[3190]: E1108 00:26:17.681447 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.681975 kubelet[3190]: E1108 00:26:17.681945 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.681975 kubelet[3190]: W1108 00:26:17.681964 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.682236 kubelet[3190]: E1108 00:26:17.681980 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.682364 kubelet[3190]: E1108 00:26:17.682349 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.682364 kubelet[3190]: W1108 00:26:17.682366 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.682535 kubelet[3190]: E1108 00:26:17.682410 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.682914 kubelet[3190]: E1108 00:26:17.682897 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.682993 kubelet[3190]: W1108 00:26:17.682914 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.683320 kubelet[3190]: E1108 00:26:17.682932 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.687046 kubelet[3190]: E1108 00:26:17.687010 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.687046 kubelet[3190]: W1108 00:26:17.687041 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.687214 kubelet[3190]: E1108 00:26:17.687063 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.688912 kubelet[3190]: E1108 00:26:17.688887 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.688912 kubelet[3190]: W1108 00:26:17.688909 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.689045 kubelet[3190]: E1108 00:26:17.688931 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.691543 kubelet[3190]: E1108 00:26:17.691517 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.691543 kubelet[3190]: W1108 00:26:17.691542 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.691682 kubelet[3190]: E1108 00:26:17.691572 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.692664 kubelet[3190]: E1108 00:26:17.692579 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.692741 kubelet[3190]: W1108 00:26:17.692682 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.692741 kubelet[3190]: E1108 00:26:17.692704 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.693060 kubelet[3190]: E1108 00:26:17.693044 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.693121 kubelet[3190]: W1108 00:26:17.693068 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.693121 kubelet[3190]: E1108 00:26:17.693084 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.695117 kubelet[3190]: E1108 00:26:17.695085 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.695221 kubelet[3190]: W1108 00:26:17.695128 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.695221 kubelet[3190]: E1108 00:26:17.695146 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.695508 kubelet[3190]: I1108 00:26:17.695484 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/543aa209-599c-4d8e-9da3-550061520690-registration-dir\") pod \"csi-node-driver-hcwvd\" (UID: \"543aa209-599c-4d8e-9da3-550061520690\") " pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:17.695782 kubelet[3190]: E1108 00:26:17.695765 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.697458 kubelet[3190]: W1108 00:26:17.695783 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.697458 kubelet[3190]: E1108 00:26:17.697458 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.697726 kubelet[3190]: I1108 00:26:17.697507 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/543aa209-599c-4d8e-9da3-550061520690-varrun\") pod \"csi-node-driver-hcwvd\" (UID: \"543aa209-599c-4d8e-9da3-550061520690\") " pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:17.701744 kubelet[3190]: E1108 00:26:17.698473 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.701842 kubelet[3190]: W1108 00:26:17.701759 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.701842 kubelet[3190]: E1108 00:26:17.701782 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.702069 kubelet[3190]: I1108 00:26:17.702046 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn77g\" (UniqueName: \"kubernetes.io/projected/543aa209-599c-4d8e-9da3-550061520690-kube-api-access-pn77g\") pod \"csi-node-driver-hcwvd\" (UID: \"543aa209-599c-4d8e-9da3-550061520690\") " pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:17.704296 kubelet[3190]: E1108 00:26:17.704205 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.704296 kubelet[3190]: W1108 00:26:17.704247 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.704296 kubelet[3190]: E1108 00:26:17.704266 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.704453 kubelet[3190]: I1108 00:26:17.704340 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/543aa209-599c-4d8e-9da3-550061520690-kubelet-dir\") pod \"csi-node-driver-hcwvd\" (UID: \"543aa209-599c-4d8e-9da3-550061520690\") " pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:17.706070 kubelet[3190]: E1108 00:26:17.705895 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.706070 kubelet[3190]: W1108 00:26:17.705916 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.706070 kubelet[3190]: E1108 00:26:17.705934 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.706236 kubelet[3190]: I1108 00:26:17.706105 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/543aa209-599c-4d8e-9da3-550061520690-socket-dir\") pod \"csi-node-driver-hcwvd\" (UID: \"543aa209-599c-4d8e-9da3-550061520690\") " pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:17.706825 kubelet[3190]: E1108 00:26:17.706790 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.706945 kubelet[3190]: W1108 00:26:17.706930 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.707242 kubelet[3190]: E1108 00:26:17.707018 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.708404 kubelet[3190]: E1108 00:26:17.708374 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.708647 kubelet[3190]: W1108 00:26:17.708526 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.708647 kubelet[3190]: E1108 00:26:17.708572 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.711562 kubelet[3190]: E1108 00:26:17.711439 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.711562 kubelet[3190]: W1108 00:26:17.711458 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.711562 kubelet[3190]: E1108 00:26:17.711478 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.713384 kubelet[3190]: E1108 00:26:17.713361 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.713466 kubelet[3190]: W1108 00:26:17.713385 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.713466 kubelet[3190]: E1108 00:26:17.713405 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.714105 kubelet[3190]: E1108 00:26:17.714002 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.714105 kubelet[3190]: W1108 00:26:17.714018 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.714105 kubelet[3190]: E1108 00:26:17.714033 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.715958 kubelet[3190]: E1108 00:26:17.714314 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.715958 kubelet[3190]: W1108 00:26:17.714329 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.715958 kubelet[3190]: E1108 00:26:17.714345 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.716852 kubelet[3190]: E1108 00:26:17.716834 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.716923 kubelet[3190]: W1108 00:26:17.716872 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.716923 kubelet[3190]: E1108 00:26:17.716890 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.717211 kubelet[3190]: E1108 00:26:17.717196 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.717277 kubelet[3190]: W1108 00:26:17.717223 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.717277 kubelet[3190]: E1108 00:26:17.717239 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.717823 kubelet[3190]: E1108 00:26:17.717806 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.717889 kubelet[3190]: W1108 00:26:17.717823 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.717889 kubelet[3190]: E1108 00:26:17.717837 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:17.721201 kubelet[3190]: E1108 00:26:17.721091 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.721201 kubelet[3190]: W1108 00:26:17.721113 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.721201 kubelet[3190]: E1108 00:26:17.721131 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.758339 containerd[1986]: time="2025-11-08T00:26:17.758226861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb5897868-z6kds,Uid:bf5acb40-54cf-41b3-8ff0-4acda311e3cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f01cd440c7e36ccfbdbbfd2b87074c380c820e60517bbc90afa183c2780216c\"" Nov 8 00:26:17.764259 containerd[1986]: time="2025-11-08T00:26:17.764220894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:26:17.777617 containerd[1986]: time="2025-11-08T00:26:17.777339129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:17.777617 containerd[1986]: time="2025-11-08T00:26:17.777434770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:17.777617 containerd[1986]: time="2025-11-08T00:26:17.777451542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:17.778178 containerd[1986]: time="2025-11-08T00:26:17.777981719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:17.808316 kubelet[3190]: E1108 00:26:17.807764 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:17.808316 kubelet[3190]: W1108 00:26:17.807794 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:17.808316 kubelet[3190]: E1108 00:26:17.807817 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:17.840640 systemd[1]: Started cri-containerd-126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242.scope - libcontainer container 126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242.
Nov 8 00:26:18.001610 containerd[1986]: time="2025-11-08T00:26:17.999976936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x5qf9,Uid:f9e26acf-a45a-45a5-bff5-6dd223b8495f,Namespace:calico-system,Attempt:0,} returns sandbox id \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\"" Nov 8 00:26:19.241433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273572717.mount: Deactivated successfully.
Nov 8 00:26:19.683859 kubelet[3190]: E1108 00:26:19.683812 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:20.367677 containerd[1986]: time="2025-11-08T00:26:20.367625347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:20.368770 containerd[1986]: time="2025-11-08T00:26:20.368621488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:26:20.370711 containerd[1986]: time="2025-11-08T00:26:20.369841786Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:20.372376 containerd[1986]: time="2025-11-08T00:26:20.372343527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:20.373018 containerd[1986]: time="2025-11-08T00:26:20.372980609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.608712058s" Nov 8 00:26:20.373100 containerd[1986]: time="2025-11-08T00:26:20.373025109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:26:20.376872 containerd[1986]: time="2025-11-08T00:26:20.376836151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:26:20.398302 containerd[1986]: time="2025-11-08T00:26:20.398259176Z" level=info msg="CreateContainer within sandbox \"1f01cd440c7e36ccfbdbbfd2b87074c380c820e60517bbc90afa183c2780216c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:26:20.424088 containerd[1986]: time="2025-11-08T00:26:20.423980196Z" level=info msg="CreateContainer within sandbox \"1f01cd440c7e36ccfbdbbfd2b87074c380c820e60517bbc90afa183c2780216c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bf2e9145b8f820d3b725b7e2f99ec7de87023772d83ae88b52bd8c917afb9848\"" Nov 8 00:26:20.424740 containerd[1986]: time="2025-11-08T00:26:20.424712143Z" level=info msg="StartContainer for \"bf2e9145b8f820d3b725b7e2f99ec7de87023772d83ae88b52bd8c917afb9848\"" Nov 8 00:26:20.481533 systemd[1]: Started cri-containerd-bf2e9145b8f820d3b725b7e2f99ec7de87023772d83ae88b52bd8c917afb9848.scope - libcontainer container bf2e9145b8f820d3b725b7e2f99ec7de87023772d83ae88b52bd8c917afb9848. 
Nov 8 00:26:20.533657 containerd[1986]: time="2025-11-08T00:26:20.533607409Z" level=info msg="StartContainer for \"bf2e9145b8f820d3b725b7e2f99ec7de87023772d83ae88b52bd8c917afb9848\" returns successfully"
Nov 8 00:26:21.036492 kubelet[3190]: E1108 00:26:21.036479 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.036492 kubelet[3190]: W1108 00:26:21.036486 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.036550 kubelet[3190]: E1108 00:26:21.036494 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:21.036934 kubelet[3190]: E1108 00:26:21.036836 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.036934 kubelet[3190]: W1108 00:26:21.036849 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.036934 kubelet[3190]: E1108 00:26:21.036862 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:21.037077 kubelet[3190]: E1108 00:26:21.037068 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.037132 kubelet[3190]: W1108 00:26:21.037113 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.037132 kubelet[3190]: E1108 00:26:21.037129 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:21.037407 kubelet[3190]: E1108 00:26:21.037389 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.037407 kubelet[3190]: W1108 00:26:21.037402 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.037513 kubelet[3190]: E1108 00:26:21.037412 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:21.037849 kubelet[3190]: E1108 00:26:21.037832 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.037849 kubelet[3190]: W1108 00:26:21.037843 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.037849 kubelet[3190]: E1108 00:26:21.037851 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:21.038072 kubelet[3190]: E1108 00:26:21.038053 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.038072 kubelet[3190]: W1108 00:26:21.038067 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.038155 kubelet[3190]: E1108 00:26:21.038078 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:21.038352 kubelet[3190]: E1108 00:26:21.038338 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.038352 kubelet[3190]: W1108 00:26:21.038349 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.038416 kubelet[3190]: E1108 00:26:21.038359 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:21.038657 kubelet[3190]: E1108 00:26:21.038542 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.038657 kubelet[3190]: W1108 00:26:21.038555 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.038657 kubelet[3190]: E1108 00:26:21.038565 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:21.038813 kubelet[3190]: E1108 00:26:21.038796 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.038813 kubelet[3190]: W1108 00:26:21.038807 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.038813 kubelet[3190]: E1108 00:26:21.038816 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:21.039245 kubelet[3190]: E1108 00:26:21.039224 3190 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:21.039245 kubelet[3190]: W1108 00:26:21.039242 3190 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:21.039326 kubelet[3190]: E1108 00:26:21.039255 3190 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:21.684355 kubelet[3190]: E1108 00:26:21.684276 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:21.737411 containerd[1986]: time="2025-11-08T00:26:21.737278664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:21.738693 containerd[1986]: time="2025-11-08T00:26:21.738651325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:26:21.739839 containerd[1986]: time="2025-11-08T00:26:21.739455938Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:21.741790 containerd[1986]: time="2025-11-08T00:26:21.741694406Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:21.742497 containerd[1986]: time="2025-11-08T00:26:21.742465241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.365584245s" Nov 8 00:26:21.742589 containerd[1986]: time="2025-11-08T00:26:21.742575724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:26:21.747787 containerd[1986]: time="2025-11-08T00:26:21.747729471Z" level=info msg="CreateContainer within sandbox \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:26:21.767542 containerd[1986]: time="2025-11-08T00:26:21.767491674Z" level=info msg="CreateContainer within sandbox \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b\"" Nov 8 00:26:21.769375 containerd[1986]: time="2025-11-08T00:26:21.768613950Z" level=info msg="StartContainer for \"8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b\"" Nov 8 00:26:21.835136 systemd[1]: Started cri-containerd-8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b.scope - libcontainer container 8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b. 
Nov 8 00:26:21.961390 containerd[1986]: time="2025-11-08T00:26:21.961239694Z" level=info msg="StartContainer for \"8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b\" returns successfully" Nov 8 00:26:21.976552 systemd[1]: cri-containerd-8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b.scope: Deactivated successfully. Nov 8 00:26:22.008844 kubelet[3190]: I1108 00:26:22.008817 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:26:22.027716 kubelet[3190]: I1108 00:26:22.027652 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cb5897868-z6kds" podStartSLOduration=2.417043032 podStartE2EDuration="5.027635421s" podCreationTimestamp="2025-11-08 00:26:17 +0000 UTC" firstStartedPulling="2025-11-08 00:26:17.763632222 +0000 UTC m=+24.246526591" lastFinishedPulling="2025-11-08 00:26:20.374224592 +0000 UTC m=+26.857118980" observedRunningTime="2025-11-08 00:26:21.016994022 +0000 UTC m=+27.499888413" watchObservedRunningTime="2025-11-08 00:26:22.027635421 +0000 UTC m=+28.510529861" Nov 8 00:26:22.139437 containerd[1986]: time="2025-11-08T00:26:22.101389063Z" level=info msg="shim disconnected" id=8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b namespace=k8s.io Nov 8 00:26:22.139437 containerd[1986]: time="2025-11-08T00:26:22.139429652Z" level=warning msg="cleaning up after shim disconnected" id=8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b namespace=k8s.io Nov 8 00:26:22.139798 containerd[1986]: time="2025-11-08T00:26:22.139450562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:22.382959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b74a2b440ed63f0919c5459bff300a7671ca1acfee520e46ab5d6a037b8278b-rootfs.mount: Deactivated successfully. 
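The kubelet errors above all stem from one condition: the FlexVolume driver binary `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` does not exist yet, so every dynamic plugin probe fails with empty output ("unexpected end of JSON input"). The `flexvol-driver` init container pulled just above (`pod2daemon-flexvol:v3.30.4`) is what installs it. A minimal diagnostic sketch, assuming the plugin directory from the log (adjust for your node layout):

```shell
#!/bin/sh
# Hypothetical check: report whether a FlexVolume driver binary is present
# and executable at the path the kubelet probes. Prints a status line and
# returns non-zero when the driver is missing, mirroring the failure mode
# in the log above.
check_flexvol() {
    dir="$1"     # plugin exec directory, e.g. /opt/libexec/kubernetes/kubelet-plugins/volume/exec
    driver="$2"  # vendor~driver/binary, e.g. nodeagent~uds/uds
    if [ -x "$dir/$driver" ]; then
        echo "driver present: $dir/$driver"
    else
        echo "driver missing or not executable: $dir/$driver"
        return 1
    fi
}

# Example against the path named in the kubelet errors; fails until the
# flexvol-driver init container has copied the binary into place.
check_flexvol /opt/libexec/kubernetes/kubelet-plugins/volume/exec nodeagent~uds/uds || true
```

Once the init container finishes, the same check succeeds and the probe errors stop.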
Nov 8 00:26:23.014360 containerd[1986]: time="2025-11-08T00:26:23.014100707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:26:23.685704 kubelet[3190]: E1108 00:26:23.684755 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:25.684108 kubelet[3190]: E1108 00:26:25.684054 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:27.394014 containerd[1986]: time="2025-11-08T00:26:27.393967010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:27.395759 containerd[1986]: time="2025-11-08T00:26:27.395699842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:26:27.398099 containerd[1986]: time="2025-11-08T00:26:27.397832469Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:27.405878 containerd[1986]: time="2025-11-08T00:26:27.405835744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:27.406522 containerd[1986]: time="2025-11-08T00:26:27.406489640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with 
image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.392347973s" Nov 8 00:26:27.406603 containerd[1986]: time="2025-11-08T00:26:27.406525167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:26:27.413913 containerd[1986]: time="2025-11-08T00:26:27.413869363Z" level=info msg="CreateContainer within sandbox \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:26:27.442247 containerd[1986]: time="2025-11-08T00:26:27.442191657Z" level=info msg="CreateContainer within sandbox \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b\"" Nov 8 00:26:27.442954 containerd[1986]: time="2025-11-08T00:26:27.442845762Z" level=info msg="StartContainer for \"431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b\"" Nov 8 00:26:27.480970 systemd[1]: run-containerd-runc-k8s.io-431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b-runc.xEYna1.mount: Deactivated successfully. Nov 8 00:26:27.493525 systemd[1]: Started cri-containerd-431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b.scope - libcontainer container 431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b. 
Nov 8 00:26:27.530656 containerd[1986]: time="2025-11-08T00:26:27.530595205Z" level=info msg="StartContainer for \"431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b\" returns successfully" Nov 8 00:26:27.686436 kubelet[3190]: E1108 00:26:27.686313 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:28.411911 systemd[1]: cri-containerd-431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b.scope: Deactivated successfully. Nov 8 00:26:28.455259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b-rootfs.mount: Deactivated successfully. Nov 8 00:26:28.528024 kubelet[3190]: I1108 00:26:28.527991 3190 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:26:28.578781 containerd[1986]: time="2025-11-08T00:26:28.578678014Z" level=info msg="shim disconnected" id=431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b namespace=k8s.io Nov 8 00:26:28.578781 containerd[1986]: time="2025-11-08T00:26:28.578754191Z" level=warning msg="cleaning up after shim disconnected" id=431c307dbc2159f8bd7805797907b83c638509d0c368612585e5af5d756d5f8b namespace=k8s.io Nov 8 00:26:28.578781 containerd[1986]: time="2025-11-08T00:26:28.578769234Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:28.628398 systemd[1]: Created slice kubepods-burstable-pod9ea252f2_da76_4fe0_acdf_c4fc18ba31ac.slice - libcontainer container kubepods-burstable-pod9ea252f2_da76_4fe0_acdf_c4fc18ba31ac.slice. 
Nov 8 00:26:28.644690 systemd[1]: Created slice kubepods-burstable-pod75831af6_29ac_43d1_829f_acf86112d6f8.slice - libcontainer container kubepods-burstable-pod75831af6_29ac_43d1_829f_acf86112d6f8.slice. Nov 8 00:26:28.660488 systemd[1]: Created slice kubepods-besteffort-pod2854d816_9155_4f6f_a8ba_78872a67ac8c.slice - libcontainer container kubepods-besteffort-pod2854d816_9155_4f6f_a8ba_78872a67ac8c.slice. Nov 8 00:26:28.678469 systemd[1]: Created slice kubepods-besteffort-pod25c0194f_ade5_4e44_84ac_d9a48225182d.slice - libcontainer container kubepods-besteffort-pod25c0194f_ade5_4e44_84ac_d9a48225182d.slice. Nov 8 00:26:28.692101 systemd[1]: Created slice kubepods-besteffort-pod59621f83_2f27_42e2_8c18_c119c79f6847.slice - libcontainer container kubepods-besteffort-pod59621f83_2f27_42e2_8c18_c119c79f6847.slice. Nov 8 00:26:28.693706 kubelet[3190]: I1108 00:26:28.693670 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ea252f2-da76-4fe0-acdf-c4fc18ba31ac-config-volume\") pod \"coredns-66bc5c9577-phbxx\" (UID: \"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac\") " pod="kube-system/coredns-66bc5c9577-phbxx" Nov 8 00:26:28.694146 kubelet[3190]: I1108 00:26:28.693718 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlstj\" (UniqueName: \"kubernetes.io/projected/9ea252f2-da76-4fe0-acdf-c4fc18ba31ac-kube-api-access-tlstj\") pod \"coredns-66bc5c9577-phbxx\" (UID: \"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac\") " pod="kube-system/coredns-66bc5c9577-phbxx" Nov 8 00:26:28.694146 kubelet[3190]: I1108 00:26:28.693745 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36acaf38-ef21-4c55-a6b7-ba0516894e6c-calico-apiserver-certs\") pod \"calico-apiserver-5d84f7c9c6-r5rdl\" (UID: 
\"36acaf38-ef21-4c55-a6b7-ba0516894e6c\") " pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" Nov 8 00:26:28.694146 kubelet[3190]: I1108 00:26:28.693770 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzpqz\" (UniqueName: \"kubernetes.io/projected/2854d816-9155-4f6f-a8ba-78872a67ac8c-kube-api-access-xzpqz\") pod \"calico-kube-controllers-756f78cd95-ppxpv\" (UID: \"2854d816-9155-4f6f-a8ba-78872a67ac8c\") " pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" Nov 8 00:26:28.694146 kubelet[3190]: I1108 00:26:28.693801 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-ca-bundle\") pod \"whisker-5f5fdfdfd5-8qkgd\" (UID: \"25c0194f-ade5-4e44-84ac-d9a48225182d\") " pod="calico-system/whisker-5f5fdfdfd5-8qkgd" Nov 8 00:26:28.694146 kubelet[3190]: I1108 00:26:28.693833 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvzh6\" (UniqueName: \"kubernetes.io/projected/7517a6de-bfae-458e-a17f-83662a231d90-kube-api-access-fvzh6\") pod \"calico-apiserver-5d84f7c9c6-th4rp\" (UID: \"7517a6de-bfae-458e-a17f-83662a231d90\") " pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" Nov 8 00:26:28.694432 kubelet[3190]: I1108 00:26:28.693857 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnpcd\" (UniqueName: \"kubernetes.io/projected/75831af6-29ac-43d1-829f-acf86112d6f8-kube-api-access-gnpcd\") pod \"coredns-66bc5c9577-vhxwd\" (UID: \"75831af6-29ac-43d1-829f-acf86112d6f8\") " pod="kube-system/coredns-66bc5c9577-vhxwd" Nov 8 00:26:28.694432 kubelet[3190]: I1108 00:26:28.693881 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/59621f83-2f27-42e2-8c18-c119c79f6847-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-qb5jn\" (UID: \"59621f83-2f27-42e2-8c18-c119c79f6847\") " pod="calico-system/goldmane-7c778bb748-qb5jn" Nov 8 00:26:28.694432 kubelet[3190]: I1108 00:26:28.693906 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75831af6-29ac-43d1-829f-acf86112d6f8-config-volume\") pod \"coredns-66bc5c9577-vhxwd\" (UID: \"75831af6-29ac-43d1-829f-acf86112d6f8\") " pod="kube-system/coredns-66bc5c9577-vhxwd" Nov 8 00:26:28.694432 kubelet[3190]: I1108 00:26:28.693933 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/59621f83-2f27-42e2-8c18-c119c79f6847-goldmane-key-pair\") pod \"goldmane-7c778bb748-qb5jn\" (UID: \"59621f83-2f27-42e2-8c18-c119c79f6847\") " pod="calico-system/goldmane-7c778bb748-qb5jn" Nov 8 00:26:28.694432 kubelet[3190]: I1108 00:26:28.693965 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7517a6de-bfae-458e-a17f-83662a231d90-calico-apiserver-certs\") pod \"calico-apiserver-5d84f7c9c6-th4rp\" (UID: \"7517a6de-bfae-458e-a17f-83662a231d90\") " pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" Nov 8 00:26:28.694660 kubelet[3190]: I1108 00:26:28.693989 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59621f83-2f27-42e2-8c18-c119c79f6847-config\") pod \"goldmane-7c778bb748-qb5jn\" (UID: \"59621f83-2f27-42e2-8c18-c119c79f6847\") " pod="calico-system/goldmane-7c778bb748-qb5jn" Nov 8 00:26:28.694660 kubelet[3190]: I1108 00:26:28.694015 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-tjvlm\" (UniqueName: \"kubernetes.io/projected/25c0194f-ade5-4e44-84ac-d9a48225182d-kube-api-access-tjvlm\") pod \"whisker-5f5fdfdfd5-8qkgd\" (UID: \"25c0194f-ade5-4e44-84ac-d9a48225182d\") " pod="calico-system/whisker-5f5fdfdfd5-8qkgd" Nov 8 00:26:28.694660 kubelet[3190]: I1108 00:26:28.694035 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2854d816-9155-4f6f-a8ba-78872a67ac8c-tigera-ca-bundle\") pod \"calico-kube-controllers-756f78cd95-ppxpv\" (UID: \"2854d816-9155-4f6f-a8ba-78872a67ac8c\") " pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" Nov 8 00:26:28.694660 kubelet[3190]: I1108 00:26:28.694058 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqdd9\" (UniqueName: \"kubernetes.io/projected/59621f83-2f27-42e2-8c18-c119c79f6847-kube-api-access-wqdd9\") pod \"goldmane-7c778bb748-qb5jn\" (UID: \"59621f83-2f27-42e2-8c18-c119c79f6847\") " pod="calico-system/goldmane-7c778bb748-qb5jn" Nov 8 00:26:28.694660 kubelet[3190]: I1108 00:26:28.694082 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2q6n\" (UniqueName: \"kubernetes.io/projected/36acaf38-ef21-4c55-a6b7-ba0516894e6c-kube-api-access-j2q6n\") pod \"calico-apiserver-5d84f7c9c6-r5rdl\" (UID: \"36acaf38-ef21-4c55-a6b7-ba0516894e6c\") " pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" Nov 8 00:26:28.694864 kubelet[3190]: I1108 00:26:28.694113 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-backend-key-pair\") pod \"whisker-5f5fdfdfd5-8qkgd\" (UID: \"25c0194f-ade5-4e44-84ac-d9a48225182d\") " pod="calico-system/whisker-5f5fdfdfd5-8qkgd" Nov 8 00:26:28.705903 systemd[1]: 
Created slice kubepods-besteffort-pod7517a6de_bfae_458e_a17f_83662a231d90.slice - libcontainer container kubepods-besteffort-pod7517a6de_bfae_458e_a17f_83662a231d90.slice. Nov 8 00:26:28.718872 systemd[1]: Created slice kubepods-besteffort-pod36acaf38_ef21_4c55_a6b7_ba0516894e6c.slice - libcontainer container kubepods-besteffort-pod36acaf38_ef21_4c55_a6b7_ba0516894e6c.slice. Nov 8 00:26:28.941648 containerd[1986]: time="2025-11-08T00:26:28.941517980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phbxx,Uid:9ea252f2-da76-4fe0-acdf-c4fc18ba31ac,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:28.954268 containerd[1986]: time="2025-11-08T00:26:28.954201241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vhxwd,Uid:75831af6-29ac-43d1-829f-acf86112d6f8,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:28.974680 containerd[1986]: time="2025-11-08T00:26:28.974633985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756f78cd95-ppxpv,Uid:2854d816-9155-4f6f-a8ba-78872a67ac8c,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:28.995580 containerd[1986]: time="2025-11-08T00:26:28.995096272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5fdfdfd5-8qkgd,Uid:25c0194f-ade5-4e44-84ac-d9a48225182d,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:29.016713 containerd[1986]: time="2025-11-08T00:26:29.016623408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-th4rp,Uid:7517a6de-bfae-458e-a17f-83662a231d90,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:29.017004 containerd[1986]: time="2025-11-08T00:26:29.016943571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qb5jn,Uid:59621f83-2f27-42e2-8c18-c119c79f6847,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:29.068736 containerd[1986]: time="2025-11-08T00:26:29.068349316Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-r5rdl,Uid:36acaf38-ef21-4c55-a6b7-ba0516894e6c,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:29.089177 containerd[1986]: time="2025-11-08T00:26:29.089141335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:26:29.430304 containerd[1986]: time="2025-11-08T00:26:29.430230385Z" level=error msg="Failed to destroy network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.436674 containerd[1986]: time="2025-11-08T00:26:29.436610926Z" level=error msg="encountered an error cleaning up failed sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.436931 containerd[1986]: time="2025-11-08T00:26:29.436903476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phbxx,Uid:9ea252f2-da76-4fe0-acdf-c4fc18ba31ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.462499 kubelet[3190]: E1108 00:26:29.462447 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.462773 kubelet[3190]: E1108 00:26:29.462744 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-phbxx" Nov 8 00:26:29.462900 kubelet[3190]: E1108 00:26:29.462880 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-phbxx" Nov 8 00:26:29.463463 kubelet[3190]: E1108 00:26:29.463421 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-phbxx_kube-system(9ea252f2-da76-4fe0-acdf-c4fc18ba31ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-phbxx_kube-system(9ea252f2-da76-4fe0-acdf-c4fc18ba31ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-phbxx" podUID="9ea252f2-da76-4fe0-acdf-c4fc18ba31ac" Nov 8 00:26:29.515682 containerd[1986]: time="2025-11-08T00:26:29.515633509Z" level=error msg="Failed to destroy network 
for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.519987 containerd[1986]: time="2025-11-08T00:26:29.519934105Z" level=error msg="encountered an error cleaning up failed sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.521448 containerd[1986]: time="2025-11-08T00:26:29.520186733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-r5rdl,Uid:36acaf38-ef21-4c55-a6b7-ba0516894e6c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.521122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17-shm.mount: Deactivated successfully. 
Nov 8 00:26:29.522307 kubelet[3190]: E1108 00:26:29.522251 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.522415 kubelet[3190]: E1108 00:26:29.522352 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" Nov 8 00:26:29.522711 kubelet[3190]: E1108 00:26:29.522477 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" Nov 8 00:26:29.522711 kubelet[3190]: E1108 00:26:29.522561 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d84f7c9c6-r5rdl_calico-apiserver(36acaf38-ef21-4c55-a6b7-ba0516894e6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d84f7c9c6-r5rdl_calico-apiserver(36acaf38-ef21-4c55-a6b7-ba0516894e6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:26:29.564042 containerd[1986]: time="2025-11-08T00:26:29.563987682Z" level=error msg="Failed to destroy network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.566398 containerd[1986]: time="2025-11-08T00:26:29.565485080Z" level=error msg="encountered an error cleaning up failed sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.567457 containerd[1986]: time="2025-11-08T00:26:29.567399607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vhxwd,Uid:75831af6-29ac-43d1-829f-acf86112d6f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.568651 kubelet[3190]: E1108 00:26:29.568132 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.569768 kubelet[3190]: E1108 00:26:29.568790 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vhxwd" Nov 8 00:26:29.569768 kubelet[3190]: E1108 00:26:29.568839 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vhxwd" Nov 8 00:26:29.570789 kubelet[3190]: E1108 00:26:29.570416 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vhxwd_kube-system(75831af6-29ac-43d1-829f-acf86112d6f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vhxwd_kube-system(75831af6-29ac-43d1-829f-acf86112d6f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vhxwd" podUID="75831af6-29ac-43d1-829f-acf86112d6f8" Nov 8 00:26:29.571120 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc-shm.mount: Deactivated successfully. Nov 8 00:26:29.584698 containerd[1986]: time="2025-11-08T00:26:29.584654164Z" level=error msg="Failed to destroy network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.587312 containerd[1986]: time="2025-11-08T00:26:29.585464767Z" level=error msg="encountered an error cleaning up failed sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.587521 containerd[1986]: time="2025-11-08T00:26:29.587483945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756f78cd95-ppxpv,Uid:2854d816-9155-4f6f-a8ba-78872a67ac8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.590627 kubelet[3190]: E1108 00:26:29.587881 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 
00:26:29.590627 kubelet[3190]: E1108 00:26:29.587948 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" Nov 8 00:26:29.590627 kubelet[3190]: E1108 00:26:29.587979 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" Nov 8 00:26:29.590245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1-shm.mount: Deactivated successfully. 
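Every sandbox failure in this stretch of the journal has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename because calico-node has not started (or has not mounted /var/lib/calico/) on this node. A minimal sketch of a triage helper for such entries — the sample line is condensed from the failures above, and the output field names are invented for illustration:

```shell
#!/bin/sh
# Condensed sample entry, same shape as the RunPodSandbox failures above.
line='plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory'

# Extract the plugin type between the escaped quotes (\" in the journal text).
plugin=$(printf '%s\n' "$line" | sed -n 's/.*plugin type=\\"\([a-z]*\)\\".*/\1/p')

# Extract the path whose stat failed: everything after "stat " up to the next colon.
path=$(printf '%s\n' "$line" | sed -n 's/.*stat \([^:]*\):.*/\1/p')

echo "plugin=$plugin failing_path=$path"
# prints: plugin=calico failing_path=/var/lib/calico/nodename
```

The same two sed expressions can be piped over `journalctl -u kubelet` output to confirm that all the CreatePodSandbox/KillPodSandbox errors trace back to the single missing file rather than to per-pod problems.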
Nov 8 00:26:29.590929 kubelet[3190]: E1108 00:26:29.588043 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-756f78cd95-ppxpv_calico-system(2854d816-9155-4f6f-a8ba-78872a67ac8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-756f78cd95-ppxpv_calico-system(2854d816-9155-4f6f-a8ba-78872a67ac8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:26:29.595689 containerd[1986]: time="2025-11-08T00:26:29.595643245Z" level=error msg="Failed to destroy network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.598826 containerd[1986]: time="2025-11-08T00:26:29.598762254Z" level=error msg="encountered an error cleaning up failed sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.599199 containerd[1986]: time="2025-11-08T00:26:29.599149772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5fdfdfd5-8qkgd,Uid:25c0194f-ade5-4e44-84ac-d9a48225182d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.600033 containerd[1986]: time="2025-11-08T00:26:29.599077063Z" level=error msg="Failed to destroy network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.601664 containerd[1986]: time="2025-11-08T00:26:29.600194146Z" level=error msg="encountered an error cleaning up failed sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.601664 containerd[1986]: time="2025-11-08T00:26:29.600261068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-th4rp,Uid:7517a6de-bfae-458e-a17f-83662a231d90,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.601852 kubelet[3190]: E1108 00:26:29.600315 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.601852 kubelet[3190]: E1108 00:26:29.600377 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5fdfdfd5-8qkgd" Nov 8 00:26:29.601852 kubelet[3190]: E1108 00:26:29.600401 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f5fdfdfd5-8qkgd" Nov 8 00:26:29.602006 kubelet[3190]: E1108 00:26:29.600464 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f5fdfdfd5-8qkgd_calico-system(25c0194f-ade5-4e44-84ac-d9a48225182d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f5fdfdfd5-8qkgd_calico-system(25c0194f-ade5-4e44-84ac-d9a48225182d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5fdfdfd5-8qkgd" podUID="25c0194f-ade5-4e44-84ac-d9a48225182d" Nov 8 00:26:29.602908 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707-shm.mount: Deactivated successfully. Nov 8 00:26:29.603770 kubelet[3190]: E1108 00:26:29.603014 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.603770 kubelet[3190]: E1108 00:26:29.603069 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" Nov 8 00:26:29.603770 kubelet[3190]: E1108 00:26:29.603093 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" Nov 8 00:26:29.603927 kubelet[3190]: E1108 00:26:29.603165 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d84f7c9c6-th4rp_calico-apiserver(7517a6de-bfae-458e-a17f-83662a231d90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5d84f7c9c6-th4rp_calico-apiserver(7517a6de-bfae-458e-a17f-83662a231d90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:26:29.614779 containerd[1986]: time="2025-11-08T00:26:29.614727240Z" level=error msg="Failed to destroy network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.615186 containerd[1986]: time="2025-11-08T00:26:29.615123596Z" level=error msg="encountered an error cleaning up failed sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.615427 containerd[1986]: time="2025-11-08T00:26:29.615213883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qb5jn,Uid:59621f83-2f27-42e2-8c18-c119c79f6847,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.615763 kubelet[3190]: E1108 00:26:29.615516 3190 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.615763 kubelet[3190]: E1108 00:26:29.615561 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qb5jn" Nov 8 00:26:29.615763 kubelet[3190]: E1108 00:26:29.615579 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qb5jn" Nov 8 00:26:29.615876 kubelet[3190]: E1108 00:26:29.615638 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-qb5jn_calico-system(59621f83-2f27-42e2-8c18-c119c79f6847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-qb5jn_calico-system(59621f83-2f27-42e2-8c18-c119c79f6847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:26:29.690534 systemd[1]: Created slice kubepods-besteffort-pod543aa209_599c_4d8e_9da3_550061520690.slice - libcontainer container kubepods-besteffort-pod543aa209_599c_4d8e_9da3_550061520690.slice. Nov 8 00:26:29.696526 containerd[1986]: time="2025-11-08T00:26:29.696487870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcwvd,Uid:543aa209-599c-4d8e-9da3-550061520690,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:29.761082 containerd[1986]: time="2025-11-08T00:26:29.761018522Z" level=error msg="Failed to destroy network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.761429 containerd[1986]: time="2025-11-08T00:26:29.761392469Z" level=error msg="encountered an error cleaning up failed sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.761547 containerd[1986]: time="2025-11-08T00:26:29.761459309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcwvd,Uid:543aa209-599c-4d8e-9da3-550061520690,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.761762 kubelet[3190]: E1108 
00:26:29.761720 3190 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:29.762372 kubelet[3190]: E1108 00:26:29.761785 3190 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:29.762372 kubelet[3190]: E1108 00:26:29.761818 3190 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hcwvd" Nov 8 00:26:29.762372 kubelet[3190]: E1108 00:26:29.761900 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:30.089559 kubelet[3190]: I1108 00:26:30.089444 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:30.091458 kubelet[3190]: I1108 00:26:30.091394 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:30.095516 kubelet[3190]: I1108 00:26:30.094691 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:30.095632 containerd[1986]: time="2025-11-08T00:26:30.095510933Z" level=info msg="StopPodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\"" Nov 8 00:26:30.098364 containerd[1986]: time="2025-11-08T00:26:30.097983178Z" level=info msg="Ensure that sandbox 2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc in task-service has been cleanup successfully" Nov 8 00:26:30.099524 containerd[1986]: time="2025-11-08T00:26:30.099493834Z" level=info msg="StopPodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\"" Nov 8 00:26:30.100005 containerd[1986]: time="2025-11-08T00:26:30.099817312Z" level=info msg="Ensure that sandbox 208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17 in task-service has been cleanup successfully" Nov 8 00:26:30.100379 containerd[1986]: time="2025-11-08T00:26:30.099952807Z" level=info msg="StopPodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\"" Nov 8 00:26:30.100557 containerd[1986]: time="2025-11-08T00:26:30.100542261Z" level=info msg="Ensure that sandbox dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224 in 
task-service has been cleanup successfully" Nov 8 00:26:30.105930 kubelet[3190]: I1108 00:26:30.105904 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:30.109213 containerd[1986]: time="2025-11-08T00:26:30.109173163Z" level=info msg="StopPodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\"" Nov 8 00:26:30.109555 containerd[1986]: time="2025-11-08T00:26:30.109539460Z" level=info msg="Ensure that sandbox fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c in task-service has been cleanup successfully" Nov 8 00:26:30.113098 kubelet[3190]: I1108 00:26:30.113059 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:30.122267 containerd[1986]: time="2025-11-08T00:26:30.122227830Z" level=info msg="StopPodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\"" Nov 8 00:26:30.122511 containerd[1986]: time="2025-11-08T00:26:30.122438284Z" level=info msg="Ensure that sandbox a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707 in task-service has been cleanup successfully" Nov 8 00:26:30.130759 kubelet[3190]: I1108 00:26:30.130693 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:30.133433 containerd[1986]: time="2025-11-08T00:26:30.133271468Z" level=info msg="StopPodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\"" Nov 8 00:26:30.133857 containerd[1986]: time="2025-11-08T00:26:30.133458083Z" level=info msg="Ensure that sandbox 179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1 in task-service has been cleanup successfully" Nov 8 00:26:30.135380 kubelet[3190]: I1108 00:26:30.135358 3190 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:30.144392 containerd[1986]: time="2025-11-08T00:26:30.144351387Z" level=info msg="StopPodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\"" Nov 8 00:26:30.144536 containerd[1986]: time="2025-11-08T00:26:30.144517752Z" level=info msg="Ensure that sandbox 6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff in task-service has been cleanup successfully" Nov 8 00:26:30.148120 containerd[1986]: time="2025-11-08T00:26:30.146910220Z" level=error msg="StopPodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" failed" error="failed to destroy network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.148249 kubelet[3190]: E1108 00:26:30.147245 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:30.148249 kubelet[3190]: E1108 00:26:30.147309 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc"} Nov 8 00:26:30.148249 kubelet[3190]: E1108 00:26:30.147362 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75831af6-29ac-43d1-829f-acf86112d6f8\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.148249 kubelet[3190]: E1108 00:26:30.147398 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75831af6-29ac-43d1-829f-acf86112d6f8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vhxwd" podUID="75831af6-29ac-43d1-829f-acf86112d6f8" Nov 8 00:26:30.149225 kubelet[3190]: I1108 00:26:30.149203 3190 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:30.151053 containerd[1986]: time="2025-11-08T00:26:30.150606245Z" level=info msg="StopPodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\"" Nov 8 00:26:30.151828 containerd[1986]: time="2025-11-08T00:26:30.151800262Z" level=info msg="Ensure that sandbox 3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa in task-service has been cleanup successfully" Nov 8 00:26:30.217199 containerd[1986]: time="2025-11-08T00:26:30.217155257Z" level=error msg="StopPodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" failed" error="failed to destroy network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.217597 kubelet[3190]: E1108 00:26:30.217559 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:30.217696 kubelet[3190]: E1108 00:26:30.217607 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224"} Nov 8 00:26:30.217696 kubelet[3190]: E1108 00:26:30.217636 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7517a6de-bfae-458e-a17f-83662a231d90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.217696 kubelet[3190]: E1108 00:26:30.217659 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7517a6de-bfae-458e-a17f-83662a231d90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:26:30.224315 containerd[1986]: time="2025-11-08T00:26:30.223884637Z" level=error msg="StopPodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" failed" error="failed to destroy network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.224469 kubelet[3190]: E1108 00:26:30.224084 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:30.224469 kubelet[3190]: E1108 00:26:30.224119 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17"} Nov 8 00:26:30.224469 kubelet[3190]: E1108 00:26:30.224148 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36acaf38-ef21-4c55-a6b7-ba0516894e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.224469 kubelet[3190]: E1108 00:26:30.224173 3190 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36acaf38-ef21-4c55-a6b7-ba0516894e6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:26:30.237219 containerd[1986]: time="2025-11-08T00:26:30.236963831Z" level=error msg="StopPodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" failed" error="failed to destroy network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.237942 kubelet[3190]: E1108 00:26:30.237279 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:30.237942 kubelet[3190]: E1108 00:26:30.237348 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa"} Nov 8 00:26:30.237942 kubelet[3190]: E1108 00:26:30.237380 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"543aa209-599c-4d8e-9da3-550061520690\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.237942 kubelet[3190]: E1108 00:26:30.237406 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"543aa209-599c-4d8e-9da3-550061520690\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:30.238243 containerd[1986]: time="2025-11-08T00:26:30.238009533Z" level=error msg="StopPodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" failed" error="failed to destroy network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.238322 kubelet[3190]: E1108 00:26:30.238259 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:30.238322 kubelet[3190]: E1108 00:26:30.238309 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c"} Nov 8 00:26:30.238396 kubelet[3190]: E1108 00:26:30.238335 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59621f83-2f27-42e2-8c18-c119c79f6847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.238396 kubelet[3190]: E1108 00:26:30.238364 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59621f83-2f27-42e2-8c18-c119c79f6847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:26:30.250722 containerd[1986]: time="2025-11-08T00:26:30.250680690Z" level=error msg="StopPodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" failed" error="failed to destroy network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 8 00:26:30.251251 kubelet[3190]: E1108 00:26:30.251086 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:30.251251 kubelet[3190]: E1108 00:26:30.251138 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707"} Nov 8 00:26:30.251251 kubelet[3190]: E1108 00:26:30.251181 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25c0194f-ade5-4e44-84ac-d9a48225182d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.251251 kubelet[3190]: E1108 00:26:30.251207 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25c0194f-ade5-4e44-84ac-d9a48225182d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f5fdfdfd5-8qkgd" podUID="25c0194f-ade5-4e44-84ac-d9a48225182d" Nov 8 00:26:30.254376 containerd[1986]: 
time="2025-11-08T00:26:30.254329588Z" level=error msg="StopPodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" failed" error="failed to destroy network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.254839 kubelet[3190]: E1108 00:26:30.254681 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:30.254839 kubelet[3190]: E1108 00:26:30.254740 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1"} Nov 8 00:26:30.254839 kubelet[3190]: E1108 00:26:30.254783 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2854d816-9155-4f6f-a8ba-78872a67ac8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.254839 kubelet[3190]: E1108 00:26:30.254812 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2854d816-9155-4f6f-a8ba-78872a67ac8c\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:26:30.255205 containerd[1986]: time="2025-11-08T00:26:30.255164543Z" level=error msg="StopPodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" failed" error="failed to destroy network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:30.255385 kubelet[3190]: E1108 00:26:30.255349 3190 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:30.255440 kubelet[3190]: E1108 00:26:30.255388 3190 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff"} Nov 8 00:26:30.255440 kubelet[3190]: E1108 00:26:30.255415 3190 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:30.255524 kubelet[3190]: E1108 00:26:30.255437 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-phbxx" podUID="9ea252f2-da76-4fe0-acdf-c4fc18ba31ac" Nov 8 00:26:30.456502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c-shm.mount: Deactivated successfully. Nov 8 00:26:30.456848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224-shm.mount: Deactivated successfully. Nov 8 00:26:36.818935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212605506.mount: Deactivated successfully. 
Nov 8 00:26:36.923318 containerd[1986]: time="2025-11-08T00:26:36.896547093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:26:36.925027 containerd[1986]: time="2025-11-08T00:26:36.924600945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:36.946422 containerd[1986]: time="2025-11-08T00:26:36.946349410Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:36.948744 containerd[1986]: time="2025-11-08T00:26:36.947907618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:36.950733 containerd[1986]: time="2025-11-08T00:26:36.950687905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.85817331s" Nov 8 00:26:36.950733 containerd[1986]: time="2025-11-08T00:26:36.950733513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:26:37.040729 containerd[1986]: time="2025-11-08T00:26:37.040677118Z" level=info msg="CreateContainer within sandbox \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:26:37.104029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416505884.mount: Deactivated 
successfully. Nov 8 00:26:37.123611 containerd[1986]: time="2025-11-08T00:26:37.123555014Z" level=info msg="CreateContainer within sandbox \"126d493cf6968dd305472423c7b35fd9ec83f073375f870eb0c17e6068d13242\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"94ff8d0a787e939073ee992542da2e88b3038cd9895ab4fe0ee94b8a19f77f13\"" Nov 8 00:26:37.135467 containerd[1986]: time="2025-11-08T00:26:37.135084437Z" level=info msg="StartContainer for \"94ff8d0a787e939073ee992542da2e88b3038cd9895ab4fe0ee94b8a19f77f13\"" Nov 8 00:26:37.261499 systemd[1]: Started cri-containerd-94ff8d0a787e939073ee992542da2e88b3038cd9895ab4fe0ee94b8a19f77f13.scope - libcontainer container 94ff8d0a787e939073ee992542da2e88b3038cd9895ab4fe0ee94b8a19f77f13. Nov 8 00:26:37.325710 containerd[1986]: time="2025-11-08T00:26:37.323665323Z" level=info msg="StartContainer for \"94ff8d0a787e939073ee992542da2e88b3038cd9895ab4fe0ee94b8a19f77f13\" returns successfully" Nov 8 00:26:37.447112 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:26:37.448032 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:26:37.737008 containerd[1986]: time="2025-11-08T00:26:37.735467092Z" level=info msg="StopPodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\"" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:37.891 [INFO][4581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:37.892 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" iface="eth0" netns="/var/run/netns/cni-88509fda-ca0e-0a47-b3cd-246d9ae1206e" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:37.893 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" iface="eth0" netns="/var/run/netns/cni-88509fda-ca0e-0a47-b3cd-246d9ae1206e" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:37.894 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" iface="eth0" netns="/var/run/netns/cni-88509fda-ca0e-0a47-b3cd-246d9ae1206e" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:37.894 [INFO][4581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:37.894 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.184 [INFO][4589] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.191 [INFO][4589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.193 [INFO][4589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.214 [WARNING][4589] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.215 [INFO][4589] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.218 [INFO][4589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:38.224816 containerd[1986]: 2025-11-08 00:26:38.221 [INFO][4581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:38.231566 containerd[1986]: time="2025-11-08T00:26:38.226166610Z" level=info msg="TearDown network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" successfully" Nov 8 00:26:38.231566 containerd[1986]: time="2025-11-08T00:26:38.226204223Z" level=info msg="StopPodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" returns successfully" Nov 8 00:26:38.231589 systemd[1]: run-netns-cni\x2d88509fda\x2dca0e\x2d0a47\x2db3cd\x2d246d9ae1206e.mount: Deactivated successfully. 
Nov 8 00:26:38.456243 kubelet[3190]: I1108 00:26:38.456193 3190 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-backend-key-pair\") pod \"25c0194f-ade5-4e44-84ac-d9a48225182d\" (UID: \"25c0194f-ade5-4e44-84ac-d9a48225182d\") " Nov 8 00:26:38.456243 kubelet[3190]: I1108 00:26:38.456254 3190 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-ca-bundle\") pod \"25c0194f-ade5-4e44-84ac-d9a48225182d\" (UID: \"25c0194f-ade5-4e44-84ac-d9a48225182d\") " Nov 8 00:26:38.456243 kubelet[3190]: I1108 00:26:38.456325 3190 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjvlm\" (UniqueName: \"kubernetes.io/projected/25c0194f-ade5-4e44-84ac-d9a48225182d-kube-api-access-tjvlm\") pod \"25c0194f-ade5-4e44-84ac-d9a48225182d\" (UID: \"25c0194f-ade5-4e44-84ac-d9a48225182d\") " Nov 8 00:26:38.482749 kubelet[3190]: I1108 00:26:38.481362 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c0194f-ade5-4e44-84ac-d9a48225182d-kube-api-access-tjvlm" (OuterVolumeSpecName: "kube-api-access-tjvlm") pod "25c0194f-ade5-4e44-84ac-d9a48225182d" (UID: "25c0194f-ade5-4e44-84ac-d9a48225182d"). InnerVolumeSpecName "kube-api-access-tjvlm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:26:38.492309 kubelet[3190]: I1108 00:26:38.479390 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "25c0194f-ade5-4e44-84ac-d9a48225182d" (UID: "25c0194f-ade5-4e44-84ac-d9a48225182d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:26:38.495316 kubelet[3190]: I1108 00:26:38.493704 3190 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "25c0194f-ade5-4e44-84ac-d9a48225182d" (UID: "25c0194f-ade5-4e44-84ac-d9a48225182d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:26:38.493968 systemd[1]: var-lib-kubelet-pods-25c0194f\x2dade5\x2d4e44\x2d84ac\x2dd9a48225182d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtjvlm.mount: Deactivated successfully. Nov 8 00:26:38.502608 systemd[1]: var-lib-kubelet-pods-25c0194f\x2dade5\x2d4e44\x2d84ac\x2dd9a48225182d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:26:38.556989 kubelet[3190]: I1108 00:26:38.556886 3190 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-backend-key-pair\") on node \"ip-172-31-25-121\" DevicePath \"\"" Nov 8 00:26:38.556989 kubelet[3190]: I1108 00:26:38.556924 3190 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c0194f-ade5-4e44-84ac-d9a48225182d-whisker-ca-bundle\") on node \"ip-172-31-25-121\" DevicePath \"\"" Nov 8 00:26:38.556989 kubelet[3190]: I1108 00:26:38.556933 3190 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tjvlm\" (UniqueName: \"kubernetes.io/projected/25c0194f-ade5-4e44-84ac-d9a48225182d-kube-api-access-tjvlm\") on node \"ip-172-31-25-121\" DevicePath \"\"" Nov 8 00:26:39.210128 systemd[1]: Removed slice kubepods-besteffort-pod25c0194f_ade5_4e44_84ac_d9a48225182d.slice - libcontainer container kubepods-besteffort-pod25c0194f_ade5_4e44_84ac_d9a48225182d.slice. 
Nov 8 00:26:39.270114 kubelet[3190]: I1108 00:26:39.262082 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x5qf9" podStartSLOduration=3.307455553 podStartE2EDuration="22.255142166s" podCreationTimestamp="2025-11-08 00:26:17 +0000 UTC" firstStartedPulling="2025-11-08 00:26:18.00411356 +0000 UTC m=+24.487007929" lastFinishedPulling="2025-11-08 00:26:36.951800172 +0000 UTC m=+43.434694542" observedRunningTime="2025-11-08 00:26:38.262473911 +0000 UTC m=+44.745368302" watchObservedRunningTime="2025-11-08 00:26:39.255142166 +0000 UTC m=+45.738036557" Nov 8 00:26:39.373572 systemd[1]: Created slice kubepods-besteffort-podee745a66_da8b_4b06_b62f_77bdcb118c17.slice - libcontainer container kubepods-besteffort-podee745a66_da8b_4b06_b62f_77bdcb118c17.slice. Nov 8 00:26:39.464099 kubelet[3190]: I1108 00:26:39.463959 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee745a66-da8b-4b06-b62f-77bdcb118c17-whisker-ca-bundle\") pod \"whisker-5b76967f45-ch758\" (UID: \"ee745a66-da8b-4b06-b62f-77bdcb118c17\") " pod="calico-system/whisker-5b76967f45-ch758" Nov 8 00:26:39.464099 kubelet[3190]: I1108 00:26:39.464019 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlq5\" (UniqueName: \"kubernetes.io/projected/ee745a66-da8b-4b06-b62f-77bdcb118c17-kube-api-access-9xlq5\") pod \"whisker-5b76967f45-ch758\" (UID: \"ee745a66-da8b-4b06-b62f-77bdcb118c17\") " pod="calico-system/whisker-5b76967f45-ch758" Nov 8 00:26:39.464099 kubelet[3190]: I1108 00:26:39.464051 3190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee745a66-da8b-4b06-b62f-77bdcb118c17-whisker-backend-key-pair\") pod \"whisker-5b76967f45-ch758\" (UID: 
\"ee745a66-da8b-4b06-b62f-77bdcb118c17\") " pod="calico-system/whisker-5b76967f45-ch758" Nov 8 00:26:39.530788 systemd[1]: Started sshd@9-172.31.25.121:22-139.178.89.65:58088.service - OpenSSH per-connection server daemon (139.178.89.65:58088). Nov 8 00:26:39.733107 containerd[1986]: time="2025-11-08T00:26:39.732883009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b76967f45-ch758,Uid:ee745a66-da8b-4b06-b62f-77bdcb118c17,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:39.741803 kubelet[3190]: I1108 00:26:39.741756 3190 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c0194f-ade5-4e44-84ac-d9a48225182d" path="/var/lib/kubelet/pods/25c0194f-ade5-4e44-84ac-d9a48225182d/volumes" Nov 8 00:26:39.801939 sshd[4738]: Accepted publickey for core from 139.178.89.65 port 58088 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:39.811635 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:39.837198 systemd-logind[1962]: New session 10 of user core. Nov 8 00:26:39.841506 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:26:40.192173 (udev-worker)[4558]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:26:40.222738 systemd-networkd[1785]: cali1f3ff42d1dd: Link UP Nov 8 00:26:40.227305 systemd-networkd[1785]: cali1f3ff42d1dd: Gained carrier Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:39.953 [INFO][4746] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:39.993 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0 whisker-5b76967f45- calico-system ee745a66-da8b-4b06-b62f-77bdcb118c17 939 0 2025-11-08 00:26:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b76967f45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-25-121 whisker-5b76967f45-ch758 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1f3ff42d1dd [] [] }} ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:39.993 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.070 [INFO][4765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" HandleID="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Workload="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.073 
[INFO][4765] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" HandleID="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Workload="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-121", "pod":"whisker-5b76967f45-ch758", "timestamp":"2025-11-08 00:26:40.070885931 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.073 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.073 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.073 [INFO][4765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.102 [INFO][4765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.130 [INFO][4765] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.141 [INFO][4765] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.146 [INFO][4765] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.149 [INFO][4765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.149 [INFO][4765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.151 [INFO][4765] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.160 [INFO][4765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.171 [INFO][4765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.65/26] block=192.168.65.64/26 
handle="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.171 [INFO][4765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.65/26] handle="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" host="ip-172-31-25-121" Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.171 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:40.272895 containerd[1986]: 2025-11-08 00:26:40.171 [INFO][4765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.65/26] IPv6=[] ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" HandleID="k8s-pod-network.2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Workload="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.277282 containerd[1986]: 2025-11-08 00:26:40.178 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0", GenerateName:"whisker-5b76967f45-", Namespace:"calico-system", SelfLink:"", UID:"ee745a66-da8b-4b06-b62f-77bdcb118c17", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b76967f45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"whisker-5b76967f45-ch758", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f3ff42d1dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:40.277282 containerd[1986]: 2025-11-08 00:26:40.179 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.65/32] ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.277282 containerd[1986]: 2025-11-08 00:26:40.179 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f3ff42d1dd ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.277282 containerd[1986]: 2025-11-08 00:26:40.206 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.277282 containerd[1986]: 2025-11-08 00:26:40.206 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" 
Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0", GenerateName:"whisker-5b76967f45-", Namespace:"calico-system", SelfLink:"", UID:"ee745a66-da8b-4b06-b62f-77bdcb118c17", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b76967f45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e", Pod:"whisker-5b76967f45-ch758", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f3ff42d1dd", MAC:"12:59:75:4e:94:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:40.277282 containerd[1986]: 2025-11-08 00:26:40.259 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e" Namespace="calico-system" Pod="whisker-5b76967f45-ch758" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5b76967f45--ch758-eth0" Nov 8 00:26:40.365313 containerd[1986]: 
time="2025-11-08T00:26:40.362502861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:40.365313 containerd[1986]: time="2025-11-08T00:26:40.362712864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:40.365313 containerd[1986]: time="2025-11-08T00:26:40.362744813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:40.365313 containerd[1986]: time="2025-11-08T00:26:40.362897732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:40.422636 systemd[1]: run-containerd-runc-k8s.io-2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e-runc.R9z6t5.mount: Deactivated successfully. Nov 8 00:26:40.432545 systemd[1]: Started cri-containerd-2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e.scope - libcontainer container 2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e. Nov 8 00:26:40.580707 containerd[1986]: time="2025-11-08T00:26:40.580641652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b76967f45-ch758,Uid:ee745a66-da8b-4b06-b62f-77bdcb118c17,Namespace:calico-system,Attempt:0,} returns sandbox id \"2314361fed27114ff337e320dbbe0160d8f776bb9a919ec746bff1e9ec9aee5e\"" Nov 8 00:26:40.584926 containerd[1986]: time="2025-11-08T00:26:40.584587164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:40.601981 sshd[4738]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:40.606676 systemd[1]: sshd@9-172.31.25.121:22-139.178.89.65:58088.service: Deactivated successfully. Nov 8 00:26:40.610322 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:26:40.613497 systemd-logind[1962]: Session 10 logged out. 
Waiting for processes to exit. Nov 8 00:26:40.616948 systemd-logind[1962]: Removed session 10. Nov 8 00:26:40.906951 containerd[1986]: time="2025-11-08T00:26:40.906755777Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:40.932261 containerd[1986]: time="2025-11-08T00:26:40.908916001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:40.933020 containerd[1986]: time="2025-11-08T00:26:40.909215339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:40.933100 kubelet[3190]: E1108 00:26:40.932700 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:40.933100 kubelet[3190]: E1108 00:26:40.932788 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:40.938309 kubelet[3190]: E1108 00:26:40.938198 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:40.940027 containerd[1986]: time="2025-11-08T00:26:40.939885005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:41.024723 kubelet[3190]: I1108 00:26:41.024648 3190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:26:41.259785 containerd[1986]: time="2025-11-08T00:26:41.259629263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:41.262789 containerd[1986]: time="2025-11-08T00:26:41.262701268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:41.263426 containerd[1986]: time="2025-11-08T00:26:41.262713795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:41.263505 kubelet[3190]: E1108 00:26:41.263084 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:41.263505 kubelet[3190]: E1108 00:26:41.263154 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:41.263505 kubelet[3190]: E1108 00:26:41.263404 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:41.263662 kubelet[3190]: E1108 00:26:41.263568 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:26:41.697147 containerd[1986]: time="2025-11-08T00:26:41.696790133Z" level=info msg="StopPodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\"" Nov 8 00:26:41.697956 containerd[1986]: time="2025-11-08T00:26:41.697904441Z" level=info msg="StopPodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\"" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.786 [INFO][4923] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.786 [INFO][4923] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" iface="eth0" netns="/var/run/netns/cni-413afbea-2c2c-99f2-24f4-8803366f3e68" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.789 [INFO][4923] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" iface="eth0" netns="/var/run/netns/cni-413afbea-2c2c-99f2-24f4-8803366f3e68" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.798 [INFO][4923] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" iface="eth0" netns="/var/run/netns/cni-413afbea-2c2c-99f2-24f4-8803366f3e68" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.798 [INFO][4923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.798 [INFO][4923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.834 [INFO][4937] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.834 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.834 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.848 [WARNING][4937] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.849 [INFO][4937] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.852 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:41.860824 containerd[1986]: 2025-11-08 00:26:41.857 [INFO][4923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:41.863359 containerd[1986]: time="2025-11-08T00:26:41.861549262Z" level=info msg="TearDown network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" successfully" Nov 8 00:26:41.863359 containerd[1986]: time="2025-11-08T00:26:41.861772813Z" level=info msg="StopPodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" returns successfully" Nov 8 00:26:41.870101 systemd[1]: run-netns-cni\x2d413afbea\x2d2c2c\x2d99f2\x2d24f4\x2d8803366f3e68.mount: Deactivated successfully. 
Nov 8 00:26:41.871924 containerd[1986]: time="2025-11-08T00:26:41.871788034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vhxwd,Uid:75831af6-29ac-43d1-829f-acf86112d6f8,Namespace:kube-system,Attempt:1,}" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.796 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.796 [INFO][4922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" iface="eth0" netns="/var/run/netns/cni-40448e41-86d7-472a-dcf2-66bc2f5030f6" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.798 [INFO][4922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" iface="eth0" netns="/var/run/netns/cni-40448e41-86d7-472a-dcf2-66bc2f5030f6" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.799 [INFO][4922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" iface="eth0" netns="/var/run/netns/cni-40448e41-86d7-472a-dcf2-66bc2f5030f6" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.799 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.799 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.854 [INFO][4939] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.854 [INFO][4939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.854 [INFO][4939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.876 [WARNING][4939] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.876 [INFO][4939] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.881 [INFO][4939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:41.889869 containerd[1986]: 2025-11-08 00:26:41.885 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:41.891270 containerd[1986]: time="2025-11-08T00:26:41.890382166Z" level=info msg="TearDown network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" successfully" Nov 8 00:26:41.891270 containerd[1986]: time="2025-11-08T00:26:41.890412941Z" level=info msg="StopPodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" returns successfully" Nov 8 00:26:41.896672 containerd[1986]: time="2025-11-08T00:26:41.896332260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qb5jn,Uid:59621f83-2f27-42e2-8c18-c119c79f6847,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:41.898545 systemd[1]: run-netns-cni\x2d40448e41\x2d86d7\x2d472a\x2ddcf2\x2d66bc2f5030f6.mount: Deactivated successfully. 
Nov 8 00:26:42.182124 systemd-networkd[1785]: cali3c1f96d2867: Link UP Nov 8 00:26:42.185817 systemd-networkd[1785]: cali3c1f96d2867: Gained carrier Nov 8 00:26:42.228774 systemd-networkd[1785]: cali1f3ff42d1dd: Gained IPv6LL Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:41.965 [INFO][4951] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.003 [INFO][4951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0 coredns-66bc5c9577- kube-system 75831af6-29ac-43d1-829f-acf86112d6f8 994 0 2025-11-08 00:26:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-121 coredns-66bc5c9577-vhxwd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c1f96d2867 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.003 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.103 [INFO][4983] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" 
HandleID="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.103 [INFO][4983] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" HandleID="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000377240), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-121", "pod":"coredns-66bc5c9577-vhxwd", "timestamp":"2025-11-08 00:26:42.103602561 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.105 [INFO][4983] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.105 [INFO][4983] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.105 [INFO][4983] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.130 [INFO][4983] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.137 [INFO][4983] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.144 [INFO][4983] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.147 [INFO][4983] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.150 [INFO][4983] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.150 [INFO][4983] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.153 [INFO][4983] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241 Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.159 [INFO][4983] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.170 [INFO][4983] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.66/26] block=192.168.65.64/26 
handle="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.171 [INFO][4983] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.66/26] handle="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" host="ip-172-31-25-121" Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.171 [INFO][4983] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:42.238414 containerd[1986]: 2025-11-08 00:26:42.171 [INFO][4983] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.66/26] IPv6=[] ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" HandleID="k8s-pod-network.55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.243792 containerd[1986]: 2025-11-08 00:26:42.176 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75831af6-29ac-43d1-829f-acf86112d6f8", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"coredns-66bc5c9577-vhxwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c1f96d2867", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:42.243792 containerd[1986]: 2025-11-08 00:26:42.176 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.66/32] ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.243792 containerd[1986]: 2025-11-08 00:26:42.176 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c1f96d2867 ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" 
WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.243792 containerd[1986]: 2025-11-08 00:26:42.186 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.243792 containerd[1986]: 2025-11-08 00:26:42.188 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75831af6-29ac-43d1-829f-acf86112d6f8", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241", Pod:"coredns-66bc5c9577-vhxwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c1f96d2867", MAC:"1e:0c:a1:53:6a:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:42.243792 containerd[1986]: 2025-11-08 00:26:42.211 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241" Namespace="kube-system" Pod="coredns-66bc5c9577-vhxwd" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:42.260780 kubelet[3190]: E1108 00:26:42.260598 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:26:42.308981 containerd[1986]: time="2025-11-08T00:26:42.308588266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:42.309450 containerd[1986]: time="2025-11-08T00:26:42.308957351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:42.309450 containerd[1986]: time="2025-11-08T00:26:42.309193548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.310747 containerd[1986]: time="2025-11-08T00:26:42.310442958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.366825 systemd-networkd[1785]: calic2a0c1ada60: Link UP Nov 8 00:26:42.381487 systemd-networkd[1785]: calic2a0c1ada60: Gained carrier Nov 8 00:26:42.400762 systemd[1]: Started cri-containerd-55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241.scope - libcontainer container 55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241. 
Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.017 [INFO][4966] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.041 [INFO][4966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0 goldmane-7c778bb748- calico-system 59621f83-2f27-42e2-8c18-c119c79f6847 995 0 2025-11-08 00:26:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-25-121 goldmane-7c778bb748-qb5jn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic2a0c1ada60 [] [] }} ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.043 [INFO][4966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.119 [INFO][4989] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" HandleID="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.119 [INFO][4989] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" 
HandleID="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-121", "pod":"goldmane-7c778bb748-qb5jn", "timestamp":"2025-11-08 00:26:42.11946231 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.119 [INFO][4989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.172 [INFO][4989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.172 [INFO][4989] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.233 [INFO][4989] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.248 [INFO][4989] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.277 [INFO][4989] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.296 [INFO][4989] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.309 [INFO][4989] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 
00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.309 [INFO][4989] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.314 [INFO][4989] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504 Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.319 [INFO][4989] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.336 [INFO][4989] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.67/26] block=192.168.65.64/26 handle="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.336 [INFO][4989] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.67/26] handle="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" host="ip-172-31-25-121" Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.336 [INFO][4989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:42.416119 containerd[1986]: 2025-11-08 00:26:42.336 [INFO][4989] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.67/26] IPv6=[] ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" HandleID="k8s-pod-network.06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.419033 containerd[1986]: 2025-11-08 00:26:42.344 [INFO][4966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"59621f83-2f27-42e2-8c18-c119c79f6847", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"goldmane-7c778bb748-qb5jn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic2a0c1ada60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:42.419033 containerd[1986]: 2025-11-08 00:26:42.349 [INFO][4966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.67/32] ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.419033 containerd[1986]: 2025-11-08 00:26:42.351 [INFO][4966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2a0c1ada60 ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.419033 containerd[1986]: 2025-11-08 00:26:42.386 [INFO][4966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.419033 containerd[1986]: 2025-11-08 00:26:42.388 [INFO][4966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"59621f83-2f27-42e2-8c18-c119c79f6847", ResourceVersion:"995", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504", Pod:"goldmane-7c778bb748-qb5jn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2a0c1ada60", MAC:"ea:3f:69:e7:e5:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:42.419033 containerd[1986]: 2025-11-08 00:26:42.408 [INFO][4966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504" Namespace="calico-system" Pod="goldmane-7c778bb748-qb5jn" WorkloadEndpoint="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:42.457949 containerd[1986]: time="2025-11-08T00:26:42.457464708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:42.458505 containerd[1986]: time="2025-11-08T00:26:42.458171758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:42.458505 containerd[1986]: time="2025-11-08T00:26:42.458197573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.458806 containerd[1986]: time="2025-11-08T00:26:42.458484526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:42.492559 systemd[1]: Started cri-containerd-06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504.scope - libcontainer container 06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504. Nov 8 00:26:42.506439 containerd[1986]: time="2025-11-08T00:26:42.506394080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vhxwd,Uid:75831af6-29ac-43d1-829f-acf86112d6f8,Namespace:kube-system,Attempt:1,} returns sandbox id \"55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241\"" Nov 8 00:26:42.520013 containerd[1986]: time="2025-11-08T00:26:42.519761815Z" level=info msg="CreateContainer within sandbox \"55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:26:42.576262 containerd[1986]: time="2025-11-08T00:26:42.576122114Z" level=info msg="CreateContainer within sandbox \"55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a111e1fd28c7cd283276bf6d4a469b82b20e34b3d7221df75d20b98682dd0400\"" Nov 8 00:26:42.585224 containerd[1986]: time="2025-11-08T00:26:42.581657813Z" level=info msg="StartContainer for \"a111e1fd28c7cd283276bf6d4a469b82b20e34b3d7221df75d20b98682dd0400\"" Nov 8 00:26:42.641581 systemd[1]: Started cri-containerd-a111e1fd28c7cd283276bf6d4a469b82b20e34b3d7221df75d20b98682dd0400.scope - libcontainer container 
a111e1fd28c7cd283276bf6d4a469b82b20e34b3d7221df75d20b98682dd0400. Nov 8 00:26:42.691916 containerd[1986]: time="2025-11-08T00:26:42.691713932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qb5jn,Uid:59621f83-2f27-42e2-8c18-c119c79f6847,Namespace:calico-system,Attempt:1,} returns sandbox id \"06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504\"" Nov 8 00:26:42.694835 containerd[1986]: time="2025-11-08T00:26:42.694474944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:42.726729 containerd[1986]: time="2025-11-08T00:26:42.724464201Z" level=info msg="StartContainer for \"a111e1fd28c7cd283276bf6d4a469b82b20e34b3d7221df75d20b98682dd0400\" returns successfully" Nov 8 00:26:42.777312 kernel: bpftool[5143]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:26:43.044836 containerd[1986]: time="2025-11-08T00:26:43.044328779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:43.047349 containerd[1986]: time="2025-11-08T00:26:43.046650876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:43.047349 containerd[1986]: time="2025-11-08T00:26:43.046778079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:43.047529 kubelet[3190]: E1108 00:26:43.047151 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:43.047529 kubelet[3190]: E1108 00:26:43.047222 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:43.047811 kubelet[3190]: E1108 00:26:43.047780 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qb5jn_calico-system(59621f83-2f27-42e2-8c18-c119c79f6847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:43.047898 kubelet[3190]: E1108 00:26:43.047850 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:26:43.198905 systemd-networkd[1785]: vxlan.calico: Link UP Nov 8 00:26:43.199095 systemd-networkd[1785]: vxlan.calico: Gained carrier Nov 8 00:26:43.247801 kubelet[3190]: E1108 00:26:43.247761 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:26:43.271819 (udev-worker)[4557]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:26:43.278516 kubelet[3190]: I1108 00:26:43.278436 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vhxwd" podStartSLOduration=43.278413079 podStartE2EDuration="43.278413079s" podCreationTimestamp="2025-11-08 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:43.274059852 +0000 UTC m=+49.756954242" watchObservedRunningTime="2025-11-08 00:26:43.278413079 +0000 UTC m=+49.761307470" Nov 8 00:26:43.567484 systemd-networkd[1785]: cali3c1f96d2867: Gained IPv6LL Nov 8 00:26:43.687248 containerd[1986]: time="2025-11-08T00:26:43.687151149Z" level=info msg="StopPodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\"" Nov 8 00:26:43.689628 containerd[1986]: time="2025-11-08T00:26:43.689221706Z" level=info msg="StopPodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\"" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.764 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.764 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" iface="eth0" netns="/var/run/netns/cni-175dddd0-5b14-6218-9e3f-b126f5dfc065" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.765 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" iface="eth0" netns="/var/run/netns/cni-175dddd0-5b14-6218-9e3f-b126f5dfc065" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.765 [INFO][5236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" iface="eth0" netns="/var/run/netns/cni-175dddd0-5b14-6218-9e3f-b126f5dfc065" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.765 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.765 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.808 [INFO][5253] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.808 [INFO][5253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.808 [INFO][5253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.817 [WARNING][5253] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.817 [INFO][5253] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.825 [INFO][5253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:43.831237 containerd[1986]: 2025-11-08 00:26:43.827 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:43.834320 containerd[1986]: time="2025-11-08T00:26:43.831417084Z" level=info msg="TearDown network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" successfully" Nov 8 00:26:43.834320 containerd[1986]: time="2025-11-08T00:26:43.831442829Z" level=info msg="StopPodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" returns successfully" Nov 8 00:26:43.834740 systemd[1]: run-netns-cni\x2d175dddd0\x2d5b14\x2d6218\x2d9e3f\x2db126f5dfc065.mount: Deactivated successfully. 
Nov 8 00:26:43.837498 containerd[1986]: time="2025-11-08T00:26:43.837192860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756f78cd95-ppxpv,Uid:2854d816-9155-4f6f-a8ba-78872a67ac8c,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.781 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.781 [INFO][5237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" iface="eth0" netns="/var/run/netns/cni-2daddc60-12ab-f520-7d21-40b6d843a5a4" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.781 [INFO][5237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" iface="eth0" netns="/var/run/netns/cni-2daddc60-12ab-f520-7d21-40b6d843a5a4" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.783 [INFO][5237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" iface="eth0" netns="/var/run/netns/cni-2daddc60-12ab-f520-7d21-40b6d843a5a4" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.783 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.783 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.841 [INFO][5258] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.842 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.842 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.849 [WARNING][5258] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.849 [INFO][5258] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.851 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:43.857427 containerd[1986]: 2025-11-08 00:26:43.854 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:43.860050 containerd[1986]: time="2025-11-08T00:26:43.858383507Z" level=info msg="TearDown network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" successfully" Nov 8 00:26:43.860050 containerd[1986]: time="2025-11-08T00:26:43.858431940Z" level=info msg="StopPodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" returns successfully" Nov 8 00:26:43.861920 systemd[1]: run-netns-cni\x2d2daddc60\x2d12ab\x2df520\x2d7d21\x2d40b6d843a5a4.mount: Deactivated successfully. 
Nov 8 00:26:43.867783 containerd[1986]: time="2025-11-08T00:26:43.866493513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phbxx,Uid:9ea252f2-da76-4fe0-acdf-c4fc18ba31ac,Namespace:kube-system,Attempt:1,}" Nov 8 00:26:44.064976 systemd-networkd[1785]: calia7fc7e28218: Link UP Nov 8 00:26:44.067548 systemd-networkd[1785]: calia7fc7e28218: Gained carrier Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:43.956 [INFO][5267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0 calico-kube-controllers-756f78cd95- calico-system 2854d816-9155-4f6f-a8ba-78872a67ac8c 1039 0 2025-11-08 00:26:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:756f78cd95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-121 calico-kube-controllers-756f78cd95-ppxpv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia7fc7e28218 [] [] }} ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:43.958 [INFO][5267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.004 [INFO][5290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" HandleID="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.004 [INFO][5290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" HandleID="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5270), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-121", "pod":"calico-kube-controllers-756f78cd95-ppxpv", "timestamp":"2025-11-08 00:26:44.004391801 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.004 [INFO][5290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.004 [INFO][5290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.004 [INFO][5290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.017 [INFO][5290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.023 [INFO][5290] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.030 [INFO][5290] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.032 [INFO][5290] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.035 [INFO][5290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.035 [INFO][5290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.037 [INFO][5290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.043 [INFO][5290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.052 [INFO][5290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.68/26] block=192.168.65.64/26 
handle="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.052 [INFO][5290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.68/26] handle="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" host="ip-172-31-25-121" Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.052 [INFO][5290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:44.090272 containerd[1986]: 2025-11-08 00:26:44.052 [INFO][5290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.68/26] IPv6=[] ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" HandleID="k8s-pod-network.09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.091268 containerd[1986]: 2025-11-08 00:26:44.059 [INFO][5267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0", GenerateName:"calico-kube-controllers-756f78cd95-", Namespace:"calico-system", SelfLink:"", UID:"2854d816-9155-4f6f-a8ba-78872a67ac8c", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756f78cd95", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"calico-kube-controllers-756f78cd95-ppxpv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7fc7e28218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:44.091268 containerd[1986]: 2025-11-08 00:26:44.059 [INFO][5267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.68/32] ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.091268 containerd[1986]: 2025-11-08 00:26:44.059 [INFO][5267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7fc7e28218 ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.091268 containerd[1986]: 2025-11-08 00:26:44.069 [INFO][5267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" 
WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.091268 containerd[1986]: 2025-11-08 00:26:44.070 [INFO][5267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0", GenerateName:"calico-kube-controllers-756f78cd95-", Namespace:"calico-system", SelfLink:"", UID:"2854d816-9155-4f6f-a8ba-78872a67ac8c", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756f78cd95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c", Pod:"calico-kube-controllers-756f78cd95-ppxpv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7fc7e28218", 
MAC:"2e:26:1c:63:4b:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:44.091268 containerd[1986]: 2025-11-08 00:26:44.084 [INFO][5267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c" Namespace="calico-system" Pod="calico-kube-controllers-756f78cd95-ppxpv" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:44.137179 containerd[1986]: time="2025-11-08T00:26:44.137016800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:44.137932 containerd[1986]: time="2025-11-08T00:26:44.137869774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:44.138191 containerd[1986]: time="2025-11-08T00:26:44.138079607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:44.138541 containerd[1986]: time="2025-11-08T00:26:44.138416653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:44.175494 systemd[1]: Started cri-containerd-09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c.scope - libcontainer container 09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c. 
Nov 8 00:26:44.197381 systemd-networkd[1785]: caliae301902e16: Link UP Nov 8 00:26:44.209195 systemd-networkd[1785]: caliae301902e16: Gained carrier Nov 8 00:26:44.261202 kubelet[3190]: E1108 00:26:44.260594 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:43.979 [INFO][5280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0 coredns-66bc5c9577- kube-system 9ea252f2-da76-4fe0-acdf-c4fc18ba31ac 1040 0 2025-11-08 00:26:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-121 coredns-66bc5c9577-phbxx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliae301902e16 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:43.979 [INFO][5280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" 
Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.027 [INFO][5295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" HandleID="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.028 [INFO][5295] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" HandleID="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332450), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-121", "pod":"coredns-66bc5c9577-phbxx", "timestamp":"2025-11-08 00:26:44.027002851 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.028 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.052 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.053 [INFO][5295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.122 [INFO][5295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.134 [INFO][5295] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.146 [INFO][5295] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.158 [INFO][5295] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.165 [INFO][5295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.165 [INFO][5295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.169 [INFO][5295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.177 [INFO][5295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.186 [INFO][5295] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.69/26] block=192.168.65.64/26 
handle="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.187 [INFO][5295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.69/26] handle="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" host="ip-172-31-25-121" Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.187 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:44.263006 containerd[1986]: 2025-11-08 00:26:44.187 [INFO][5295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.69/26] IPv6=[] ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" HandleID="k8s-pod-network.799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.266221 containerd[1986]: 2025-11-08 00:26:44.191 [INFO][5280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"coredns-66bc5c9577-phbxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae301902e16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:44.266221 containerd[1986]: 2025-11-08 00:26:44.191 [INFO][5280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.69/32] ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.266221 containerd[1986]: 2025-11-08 00:26:44.191 [INFO][5280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae301902e16 ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" 
WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.266221 containerd[1986]: 2025-11-08 00:26:44.204 [INFO][5280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.266221 containerd[1986]: 2025-11-08 00:26:44.214 [INFO][5280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e", Pod:"coredns-66bc5c9577-phbxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae301902e16", MAC:"f2:fe:d7:ed:57:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:44.266221 containerd[1986]: 2025-11-08 00:26:44.250 [INFO][5280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e" Namespace="kube-system" Pod="coredns-66bc5c9577-phbxx" WorkloadEndpoint="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:44.324515 containerd[1986]: time="2025-11-08T00:26:44.323693080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:44.324515 containerd[1986]: time="2025-11-08T00:26:44.323775653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:44.324515 containerd[1986]: time="2025-11-08T00:26:44.323823892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:44.327758 containerd[1986]: time="2025-11-08T00:26:44.323978497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:44.347639 containerd[1986]: time="2025-11-08T00:26:44.347533626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-756f78cd95-ppxpv,Uid:2854d816-9155-4f6f-a8ba-78872a67ac8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c\"" Nov 8 00:26:44.355407 containerd[1986]: time="2025-11-08T00:26:44.354792448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:26:44.399521 systemd[1]: Started cri-containerd-799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e.scope - libcontainer container 799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e. Nov 8 00:26:44.400924 systemd-networkd[1785]: calic2a0c1ada60: Gained IPv6LL Nov 8 00:26:44.518548 containerd[1986]: time="2025-11-08T00:26:44.518456090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-phbxx,Uid:9ea252f2-da76-4fe0-acdf-c4fc18ba31ac,Namespace:kube-system,Attempt:1,} returns sandbox id \"799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e\"" Nov 8 00:26:44.533839 containerd[1986]: time="2025-11-08T00:26:44.533784349Z" level=info msg="CreateContainer within sandbox \"799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:26:44.564933 containerd[1986]: time="2025-11-08T00:26:44.564871615Z" level=info msg="CreateContainer within sandbox \"799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bb3f00e7bf8ceb865e094a8d7f7ff990f83b078a4fb75b5eda0ed488fce2161\"" Nov 8 00:26:44.565755 
containerd[1986]: time="2025-11-08T00:26:44.565718797Z" level=info msg="StartContainer for \"8bb3f00e7bf8ceb865e094a8d7f7ff990f83b078a4fb75b5eda0ed488fce2161\"" Nov 8 00:26:44.609520 systemd[1]: Started cri-containerd-8bb3f00e7bf8ceb865e094a8d7f7ff990f83b078a4fb75b5eda0ed488fce2161.scope - libcontainer container 8bb3f00e7bf8ceb865e094a8d7f7ff990f83b078a4fb75b5eda0ed488fce2161. Nov 8 00:26:44.638584 containerd[1986]: time="2025-11-08T00:26:44.638352529Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:44.641364 containerd[1986]: time="2025-11-08T00:26:44.641000684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:26:44.641364 containerd[1986]: time="2025-11-08T00:26:44.641106667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:44.641551 kubelet[3190]: E1108 00:26:44.641491 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:44.641928 kubelet[3190]: E1108 00:26:44.641543 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:44.641928 kubelet[3190]: E1108 00:26:44.641632 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-756f78cd95-ppxpv_calico-system(2854d816-9155-4f6f-a8ba-78872a67ac8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:44.641928 kubelet[3190]: E1108 00:26:44.641675 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:26:44.667771 containerd[1986]: time="2025-11-08T00:26:44.667722596Z" level=info msg="StartContainer for \"8bb3f00e7bf8ceb865e094a8d7f7ff990f83b078a4fb75b5eda0ed488fce2161\" returns successfully" Nov 8 00:26:44.686895 containerd[1986]: time="2025-11-08T00:26:44.686854566Z" level=info msg="StopPodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\"" Nov 8 00:26:44.690181 containerd[1986]: time="2025-11-08T00:26:44.689788193Z" level=info msg="StopPodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\"" Nov 8 00:26:44.699472 containerd[1986]: time="2025-11-08T00:26:44.699272388Z" level=info msg="StopPodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\"" Nov 8 00:26:44.967065 containerd[1986]: 
2025-11-08 00:26:44.814 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.814 [INFO][5471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" iface="eth0" netns="/var/run/netns/cni-b70d594a-694d-efef-5c55-3db9d7b07c59" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.815 [INFO][5471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" iface="eth0" netns="/var/run/netns/cni-b70d594a-694d-efef-5c55-3db9d7b07c59" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.818 [INFO][5471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" iface="eth0" netns="/var/run/netns/cni-b70d594a-694d-efef-5c55-3db9d7b07c59" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.818 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.818 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.910 [INFO][5490] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.910 [INFO][5490] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.910 [INFO][5490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.936 [WARNING][5490] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.936 [INFO][5490] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.944 [INFO][5490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:44.967065 containerd[1986]: 2025-11-08 00:26:44.956 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:44.969829 containerd[1986]: time="2025-11-08T00:26:44.969693299Z" level=info msg="TearDown network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" successfully" Nov 8 00:26:44.970975 containerd[1986]: time="2025-11-08T00:26:44.970352563Z" level=info msg="StopPodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" returns successfully" Nov 8 00:26:44.975557 systemd-networkd[1785]: vxlan.calico: Gained IPv6LL Nov 8 00:26:44.979648 systemd[1]: run-netns-cni\x2db70d594a\x2d694d\x2defef\x2d5c55\x2d3db9d7b07c59.mount: Deactivated successfully. 
Nov 8 00:26:44.984256 containerd[1986]: time="2025-11-08T00:26:44.984210305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcwvd,Uid:543aa209-599c-4d8e-9da3-550061520690,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:44.929 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:44.929 [INFO][5472] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" iface="eth0" netns="/var/run/netns/cni-6b8322d5-8e67-8af3-9a28-0b97b638316a" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:44.930 [INFO][5472] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" iface="eth0" netns="/var/run/netns/cni-6b8322d5-8e67-8af3-9a28-0b97b638316a" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:44.930 [INFO][5472] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" iface="eth0" netns="/var/run/netns/cni-6b8322d5-8e67-8af3-9a28-0b97b638316a" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:44.930 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:44.930 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.042 [INFO][5501] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.046 [INFO][5501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.049 [INFO][5501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.068 [WARNING][5501] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.069 [INFO][5501] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.072 [INFO][5501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:45.085087 containerd[1986]: 2025-11-08 00:26:45.078 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:45.089376 containerd[1986]: time="2025-11-08T00:26:45.089041278Z" level=info msg="TearDown network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" successfully" Nov 8 00:26:45.089376 containerd[1986]: time="2025-11-08T00:26:45.089103221Z" level=info msg="StopPodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" returns successfully" Nov 8 00:26:45.099889 containerd[1986]: time="2025-11-08T00:26:45.098414409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-th4rp,Uid:7517a6de-bfae-458e-a17f-83662a231d90,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:26:45.098917 systemd[1]: run-netns-cni\x2d6b8322d5\x2d8e67\x2d8af3\x2d9a28\x2d0b97b638316a.mount: Deactivated successfully. 
Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:44.944 [INFO][5473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:44.945 [INFO][5473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" iface="eth0" netns="/var/run/netns/cni-49613e65-00bc-9027-b342-e97f73fc053c" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:44.946 [INFO][5473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" iface="eth0" netns="/var/run/netns/cni-49613e65-00bc-9027-b342-e97f73fc053c" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:44.946 [INFO][5473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" iface="eth0" netns="/var/run/netns/cni-49613e65-00bc-9027-b342-e97f73fc053c" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:44.947 [INFO][5473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:44.947 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.057 [INFO][5506] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.059 [INFO][5506] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.072 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.094 [WARNING][5506] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.094 [INFO][5506] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.103 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:45.117672 containerd[1986]: 2025-11-08 00:26:45.112 [INFO][5473] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:45.119888 containerd[1986]: time="2025-11-08T00:26:45.119094515Z" level=info msg="TearDown network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" successfully" Nov 8 00:26:45.119888 containerd[1986]: time="2025-11-08T00:26:45.119130762Z" level=info msg="StopPodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" returns successfully" Nov 8 00:26:45.128473 containerd[1986]: time="2025-11-08T00:26:45.128332113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-r5rdl,Uid:36acaf38-ef21-4c55-a6b7-ba0516894e6c,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:26:45.277456 kubelet[3190]: E1108 00:26:45.277107 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:26:45.488390 systemd-networkd[1785]: caliae301902e16: Gained IPv6LL Nov 8 00:26:45.522175 systemd-networkd[1785]: cali3ba53049bac: Link UP Nov 8 00:26:45.523256 systemd-networkd[1785]: cali3ba53049bac: Gained carrier Nov 8 00:26:45.565668 kubelet[3190]: I1108 00:26:45.565520 3190 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-phbxx" podStartSLOduration=45.565494807 podStartE2EDuration="45.565494807s" podCreationTimestamp="2025-11-08 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:45.446509901 +0000 UTC m=+51.929404293" watchObservedRunningTime="2025-11-08 00:26:45.565494807 +0000 UTC m=+52.048389197" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.129 [INFO][5512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0 csi-node-driver- calico-system 543aa209-599c-4d8e-9da3-550061520690 1071 0 2025-11-08 00:26:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-121 csi-node-driver-hcwvd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3ba53049bac [] [] }} ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.129 [INFO][5512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.262 [INFO][5526] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" HandleID="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.262 
[INFO][5526] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" HandleID="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-121", "pod":"csi-node-driver-hcwvd", "timestamp":"2025-11-08 00:26:45.262495385 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.262 [INFO][5526] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.262 [INFO][5526] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.262 [INFO][5526] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.295 [INFO][5526] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.356 [INFO][5526] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.403 [INFO][5526] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.424 [INFO][5526] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.457 [INFO][5526] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.459 [INFO][5526] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.464 [INFO][5526] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.486 [INFO][5526] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.508 [INFO][5526] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.70/26] block=192.168.65.64/26 
handle="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.508 [INFO][5526] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.70/26] handle="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" host="ip-172-31-25-121" Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.508 [INFO][5526] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:45.573538 containerd[1986]: 2025-11-08 00:26:45.508 [INFO][5526] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.70/26] IPv6=[] ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" HandleID="k8s-pod-network.7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.576982 containerd[1986]: 2025-11-08 00:26:45.512 [INFO][5512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543aa209-599c-4d8e-9da3-550061520690", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"csi-node-driver-hcwvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ba53049bac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:45.576982 containerd[1986]: 2025-11-08 00:26:45.513 [INFO][5512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.70/32] ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.576982 containerd[1986]: 2025-11-08 00:26:45.513 [INFO][5512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ba53049bac ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.576982 containerd[1986]: 2025-11-08 00:26:45.522 [INFO][5512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.576982 containerd[1986]: 2025-11-08 00:26:45.527 [INFO][5512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543aa209-599c-4d8e-9da3-550061520690", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba", Pod:"csi-node-driver-hcwvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ba53049bac", MAC:"d6:ea:8a:1a:d6:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:45.576982 containerd[1986]: 2025-11-08 00:26:45.564 [INFO][5512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba" Namespace="calico-system" Pod="csi-node-driver-hcwvd" WorkloadEndpoint="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:45.644382 containerd[1986]: time="2025-11-08T00:26:45.638744569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:45.644382 containerd[1986]: time="2025-11-08T00:26:45.638847721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:45.644382 containerd[1986]: time="2025-11-08T00:26:45.638866081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:45.644382 containerd[1986]: time="2025-11-08T00:26:45.639215484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:45.654910 systemd[1]: Started sshd@10-172.31.25.121:22-139.178.89.65:58096.service - OpenSSH per-connection server daemon (139.178.89.65:58096). Nov 8 00:26:45.718553 systemd[1]: Started cri-containerd-7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba.scope - libcontainer container 7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba. 
Nov 8 00:26:45.756313 systemd-networkd[1785]: calib3014fe1dd0: Link UP Nov 8 00:26:45.759963 systemd-networkd[1785]: calib3014fe1dd0: Gained carrier Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.254 [INFO][5541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0 calico-apiserver-5d84f7c9c6- calico-apiserver 36acaf38-ef21-4c55-a6b7-ba0516894e6c 1075 0 2025-11-08 00:26:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d84f7c9c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-121 calico-apiserver-5d84f7c9c6-r5rdl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib3014fe1dd0 [] [] }} ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.255 [INFO][5541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.389 [INFO][5557] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" HandleID="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 
00:26:45.390 [INFO][5557] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" HandleID="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-121", "pod":"calico-apiserver-5d84f7c9c6-r5rdl", "timestamp":"2025-11-08 00:26:45.389480263 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.390 [INFO][5557] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.508 [INFO][5557] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.508 [INFO][5557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.547 [INFO][5557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.580 [INFO][5557] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.596 [INFO][5557] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.600 [INFO][5557] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.606 [INFO][5557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.606 [INFO][5557] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.611 [INFO][5557] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.649 [INFO][5557] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.731 [INFO][5557] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.71/26] block=192.168.65.64/26 
handle="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.732 [INFO][5557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.71/26] handle="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" host="ip-172-31-25-121" Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.733 [INFO][5557] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:45.799993 containerd[1986]: 2025-11-08 00:26:45.733 [INFO][5557] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.71/26] IPv6=[] ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" HandleID="k8s-pod-network.f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.801271 containerd[1986]: 2025-11-08 00:26:45.746 [INFO][5541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"36acaf38-ef21-4c55-a6b7-ba0516894e6c", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"calico-apiserver-5d84f7c9c6-r5rdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3014fe1dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:45.801271 containerd[1986]: 2025-11-08 00:26:45.746 [INFO][5541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.71/32] ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.801271 containerd[1986]: 2025-11-08 00:26:45.746 [INFO][5541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3014fe1dd0 ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.801271 containerd[1986]: 2025-11-08 00:26:45.762 [INFO][5541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.801271 
containerd[1986]: 2025-11-08 00:26:45.762 [INFO][5541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"36acaf38-ef21-4c55-a6b7-ba0516894e6c", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac", Pod:"calico-apiserver-5d84f7c9c6-r5rdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3014fe1dd0", MAC:"1e:b8:f1:01:dc:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:45.801271 
containerd[1986]: 2025-11-08 00:26:45.794 [INFO][5541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-r5rdl" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:45.862030 containerd[1986]: time="2025-11-08T00:26:45.861693057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:45.862030 containerd[1986]: time="2025-11-08T00:26:45.861779168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:45.862030 containerd[1986]: time="2025-11-08T00:26:45.861794717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:45.862030 containerd[1986]: time="2025-11-08T00:26:45.861895045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:45.909214 systemd[1]: run-netns-cni\x2d49613e65\x2d00bc\x2d9027\x2db342\x2de97f73fc053c.mount: Deactivated successfully. Nov 8 00:26:45.909843 sshd[5600]: Accepted publickey for core from 139.178.89.65 port 58096 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:45.917492 sshd[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:45.944738 systemd[1]: run-containerd-runc-k8s.io-f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac-runc.88djqJ.mount: Deactivated successfully. 
Nov 8 00:26:45.954596 systemd-networkd[1785]: cali90fa7b73f43: Link UP Nov 8 00:26:45.956619 systemd[1]: Started cri-containerd-f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac.scope - libcontainer container f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac. Nov 8 00:26:45.960564 systemd-networkd[1785]: cali90fa7b73f43: Gained carrier Nov 8 00:26:45.964358 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:26:45.966113 systemd-logind[1962]: New session 11 of user core. Nov 8 00:26:45.990212 containerd[1986]: time="2025-11-08T00:26:45.987767239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hcwvd,Uid:543aa209-599c-4d8e-9da3-550061520690,Namespace:calico-system,Attempt:1,} returns sandbox id \"7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba\"" Nov 8 00:26:45.998223 containerd[1986]: time="2025-11-08T00:26:45.997681035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:26:45.999721 systemd-networkd[1785]: calia7fc7e28218: Gained IPv6LL Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.362 [INFO][5527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0 calico-apiserver-5d84f7c9c6- calico-apiserver 7517a6de-bfae-458e-a17f-83662a231d90 1073 0 2025-11-08 00:26:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d84f7c9c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-121 calico-apiserver-5d84f7c9c6-th4rp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90fa7b73f43 [] [] }} ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" 
Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.363 [INFO][5527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.492 [INFO][5568] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" HandleID="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.496 [INFO][5568] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" HandleID="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-121", "pod":"calico-apiserver-5d84f7c9c6-th4rp", "timestamp":"2025-11-08 00:26:45.49275493 +0000 UTC"}, Hostname:"ip-172-31-25-121", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.496 [INFO][5568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.733 [INFO][5568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.736 [INFO][5568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-121' Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.786 [INFO][5568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.804 [INFO][5568] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.821 [INFO][5568] ipam/ipam.go 511: Trying affinity for 192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.832 [INFO][5568] ipam/ipam.go 158: Attempting to load block cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.837 [INFO][5568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.837 [INFO][5568] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.842 [INFO][5568] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59 Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.857 [INFO][5568] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 
2025-11-08 00:26:45.907 [INFO][5568] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.65.72/26] block=192.168.65.64/26 handle="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.907 [INFO][5568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.65.72/26] handle="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" host="ip-172-31-25-121" Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.907 [INFO][5568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:46.021890 containerd[1986]: 2025-11-08 00:26:45.907 [INFO][5568] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.65.72/26] IPv6=[] ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" HandleID="k8s-pod-network.997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.025196 containerd[1986]: 2025-11-08 00:26:45.935 [INFO][5527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7517a6de-bfae-458e-a17f-83662a231d90", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"", Pod:"calico-apiserver-5d84f7c9c6-th4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90fa7b73f43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:46.025196 containerd[1986]: 2025-11-08 00:26:45.935 [INFO][5527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.72/32] ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.025196 containerd[1986]: 2025-11-08 00:26:45.935 [INFO][5527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90fa7b73f43 ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.025196 containerd[1986]: 2025-11-08 00:26:45.969 [INFO][5527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" 
WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.025196 containerd[1986]: 2025-11-08 00:26:45.985 [INFO][5527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7517a6de-bfae-458e-a17f-83662a231d90", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59", Pod:"calico-apiserver-5d84f7c9c6-th4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90fa7b73f43", MAC:"32:dc:14:c6:b3:db", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:46.025196 containerd[1986]: 2025-11-08 00:26:46.015 [INFO][5527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59" Namespace="calico-apiserver" Pod="calico-apiserver-5d84f7c9c6-th4rp" WorkloadEndpoint="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:46.074132 containerd[1986]: time="2025-11-08T00:26:46.070910727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:46.074132 containerd[1986]: time="2025-11-08T00:26:46.071006308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:46.074132 containerd[1986]: time="2025-11-08T00:26:46.071022142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:46.074132 containerd[1986]: time="2025-11-08T00:26:46.071109467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:46.113647 containerd[1986]: time="2025-11-08T00:26:46.113330735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-r5rdl,Uid:36acaf38-ef21-4c55-a6b7-ba0516894e6c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac\"" Nov 8 00:26:46.141862 systemd[1]: Started cri-containerd-997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59.scope - libcontainer container 997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59. 
Nov 8 00:26:46.217444 containerd[1986]: time="2025-11-08T00:26:46.217277480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d84f7c9c6-th4rp,Uid:7517a6de-bfae-458e-a17f-83662a231d90,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59\"" Nov 8 00:26:46.283842 containerd[1986]: time="2025-11-08T00:26:46.283775606Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:46.286619 containerd[1986]: time="2025-11-08T00:26:46.285879581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:26:46.286619 containerd[1986]: time="2025-11-08T00:26:46.285997376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:26:46.286838 kubelet[3190]: E1108 00:26:46.286323 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:46.286838 kubelet[3190]: E1108 00:26:46.286376 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:46.286838 kubelet[3190]: E1108 00:26:46.286531 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:46.288218 containerd[1986]: time="2025-11-08T00:26:46.287934055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:46.325995 kubelet[3190]: E1108 00:26:46.325647 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:26:46.594806 containerd[1986]: time="2025-11-08T00:26:46.594752535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:46.596882 containerd[1986]: time="2025-11-08T00:26:46.596825368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:46.596986 containerd[1986]: time="2025-11-08T00:26:46.596922291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:46.598311 kubelet[3190]: E1108 00:26:46.598238 3190 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:46.598415 kubelet[3190]: E1108 00:26:46.598313 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:46.598789 kubelet[3190]: E1108 00:26:46.598752 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-r5rdl_calico-apiserver(36acaf38-ef21-4c55-a6b7-ba0516894e6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:46.598923 kubelet[3190]: E1108 00:26:46.598796 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:26:46.601543 containerd[1986]: time="2025-11-08T00:26:46.601506175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:46.646113 sshd[5600]: 
pam_unix(sshd:session): session closed for user core Nov 8 00:26:46.650303 systemd[1]: sshd@10-172.31.25.121:22-139.178.89.65:58096.service: Deactivated successfully. Nov 8 00:26:46.653016 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:26:46.655203 systemd-logind[1962]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:26:46.656756 systemd-logind[1962]: Removed session 11. Nov 8 00:26:46.832400 systemd-networkd[1785]: cali3ba53049bac: Gained IPv6LL Nov 8 00:26:46.914956 containerd[1986]: time="2025-11-08T00:26:46.914835069Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:46.921756 containerd[1986]: time="2025-11-08T00:26:46.921596659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:46.921756 containerd[1986]: time="2025-11-08T00:26:46.921691024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:46.923678 kubelet[3190]: E1108 00:26:46.923432 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:46.923678 kubelet[3190]: E1108 00:26:46.923492 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:46.923852 kubelet[3190]: E1108 00:26:46.923696 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-th4rp_calico-apiserver(7517a6de-bfae-458e-a17f-83662a231d90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:46.923852 kubelet[3190]: E1108 00:26:46.923743 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:26:46.924672 containerd[1986]: time="2025-11-08T00:26:46.924637490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:26:47.215886 containerd[1986]: time="2025-11-08T00:26:47.215732602Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:47.218027 containerd[1986]: time="2025-11-08T00:26:47.217969525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:26:47.218199 containerd[1986]: time="2025-11-08T00:26:47.218068840Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:26:47.218346 kubelet[3190]: E1108 00:26:47.218307 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:47.218431 kubelet[3190]: E1108 00:26:47.218357 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:47.218479 kubelet[3190]: E1108 00:26:47.218453 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:47.218565 kubelet[3190]: E1108 00:26:47.218513 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:47.330066 kubelet[3190]: E1108 00:26:47.330014 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:26:47.331273 kubelet[3190]: E1108 00:26:47.330712 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:26:47.331779 kubelet[3190]: E1108 00:26:47.331577 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:26:47.471552 systemd-networkd[1785]: cali90fa7b73f43: Gained IPv6LL Nov 8 00:26:47.535547 systemd-networkd[1785]: calib3014fe1dd0: Gained IPv6LL Nov 8 00:26:50.210144 ntpd[1953]: Listen normally on 8 vxlan.calico 192.168.65.64:123 Nov 8 00:26:50.210239 ntpd[1953]: Listen normally on 9 cali1f3ff42d1dd [fe80::ecee:eeff:feee:eeee%4]:123
Nov 8 00:26:50.210319 ntpd[1953]: Listen normally on 10 cali3c1f96d2867 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 8 00:26:50.210350 ntpd[1953]: Listen normally on 11 calic2a0c1ada60 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:26:50.210378 ntpd[1953]: Listen normally on 12 vxlan.calico [fe80::6479:ddff:fea1:9dbd%7]:123 Nov 8 00:26:50.210408 ntpd[1953]: Listen normally on 13 calia7fc7e28218 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:26:50.210435 ntpd[1953]: Listen normally on 14 caliae301902e16 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:26:50.210466 ntpd[1953]: Listen normally on 15 cali3ba53049bac [fe80::ecee:eeff:feee:eeee%12]:123 Nov 8 00:26:50.210494 ntpd[1953]: Listen normally on 16 calib3014fe1dd0 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 8 00:26:50.210527 ntpd[1953]: Listen normally on 17 cali90fa7b73f43 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 8 00:26:51.687593 systemd[1]: Started sshd@11-172.31.25.121:22-139.178.89.65:48480.service - OpenSSH per-connection server daemon (139.178.89.65:48480). Nov 8 00:26:51.848652 sshd[5755]: Accepted publickey for core from 139.178.89.65 port 48480 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:51.850245 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:51.854727 systemd-logind[1962]: New session 12 of user core. Nov 8 00:26:51.862527 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:26:52.090278 sshd[5755]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:52.095200 systemd-logind[1962]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:26:52.095345 systemd[1]: sshd@11-172.31.25.121:22-139.178.89.65:48480.service: Deactivated successfully. Nov 8 00:26:52.098870 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:26:52.100779 systemd-logind[1962]: Removed session 12. Nov 8 00:26:52.128604 systemd[1]: Started sshd@12-172.31.25.121:22-139.178.89.65:48492.service - OpenSSH per-connection server daemon (139.178.89.65:48492). Nov 8 00:26:52.306332 sshd[5769]: Accepted publickey for core from 139.178.89.65 port 48492 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:52.307891 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:52.313580 systemd-logind[1962]: New session 13 of user core. Nov 8 00:26:52.320503 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:26:52.585454 sshd[5769]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:52.594028 systemd[1]: sshd@12-172.31.25.121:22-139.178.89.65:48492.service: Deactivated successfully. Nov 8 00:26:52.598919 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:26:52.601336 systemd-logind[1962]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:26:52.603034 systemd-logind[1962]: Removed session 13. Nov 8 00:26:52.621276 systemd[1]: Started sshd@13-172.31.25.121:22-139.178.89.65:48500.service - OpenSSH per-connection server daemon (139.178.89.65:48500). Nov 8 00:26:52.789327 sshd[5782]: Accepted publickey for core from 139.178.89.65 port 48500 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:52.790872 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:52.797224 systemd-logind[1962]: New session 14 of user core. 
Nov 8 00:26:52.803099 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:26:53.046049 sshd[5782]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:53.052004 systemd[1]: sshd@13-172.31.25.121:22-139.178.89.65:48500.service: Deactivated successfully. Nov 8 00:26:53.054805 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:26:53.055868 systemd-logind[1962]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:26:53.057634 systemd-logind[1962]: Removed session 14. Nov 8 00:26:53.656522 containerd[1986]: time="2025-11-08T00:26:53.656480906Z" level=info msg="StopPodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\"" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.740 [WARNING][5802] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", 
ContainerID:"799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e", Pod:"coredns-66bc5c9577-phbxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae301902e16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.740 [INFO][5802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.740 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" iface="eth0" netns="" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.741 [INFO][5802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.741 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.787 [INFO][5815] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.787 [INFO][5815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.787 [INFO][5815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.798 [WARNING][5815] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.798 [INFO][5815] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.800 [INFO][5815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:53.806824 containerd[1986]: 2025-11-08 00:26:53.803 [INFO][5802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.807913 containerd[1986]: time="2025-11-08T00:26:53.807353332Z" level=info msg="TearDown network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" successfully" Nov 8 00:26:53.807913 containerd[1986]: time="2025-11-08T00:26:53.807396992Z" level=info msg="StopPodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" returns successfully" Nov 8 00:26:53.821867 containerd[1986]: time="2025-11-08T00:26:53.821513141Z" level=info msg="RemovePodSandbox for \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\"" Nov 8 00:26:53.821867 containerd[1986]: time="2025-11-08T00:26:53.821563080Z" level=info msg="Forcibly stopping sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\"" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.883 [WARNING][5831] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9ea252f2-da76-4fe0-acdf-c4fc18ba31ac", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"799614c9b68e42f830e967ac4a4ab89905b1105799db70910062de5ea00c050e", Pod:"coredns-66bc5c9577-phbxx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliae301902e16", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.884 [INFO][5831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.884 [INFO][5831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" iface="eth0" netns="" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.884 [INFO][5831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.884 [INFO][5831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.919 [INFO][5838] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.920 [INFO][5838] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.920 [INFO][5838] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.929 [WARNING][5838] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.929 [INFO][5838] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" HandleID="k8s-pod-network.6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--phbxx-eth0" Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.931 [INFO][5838] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:53.935799 containerd[1986]: 2025-11-08 00:26:53.933 [INFO][5831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff" Nov 8 00:26:53.935799 containerd[1986]: time="2025-11-08T00:26:53.935512190Z" level=info msg="TearDown network for sandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" successfully" Nov 8 00:26:53.947373 containerd[1986]: time="2025-11-08T00:26:53.947329823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:53.947531 containerd[1986]: time="2025-11-08T00:26:53.947418410Z" level=info msg="RemovePodSandbox \"6b41cff05dbc2b01a8c5e32a02c6bc25d64ea72fc73b6326f29eb4cbbfdb4bff\" returns successfully" Nov 8 00:26:53.948096 containerd[1986]: time="2025-11-08T00:26:53.948060988Z" level=info msg="StopPodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\"" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:53.986 [WARNING][5852] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7517a6de-bfae-458e-a17f-83662a231d90", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59", Pod:"calico-apiserver-5d84f7c9c6-th4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90fa7b73f43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:53.986 [INFO][5852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:53.986 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" iface="eth0" netns="" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:53.986 [INFO][5852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:53.986 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.010 [INFO][5859] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.010 [INFO][5859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.010 [INFO][5859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.017 [WARNING][5859] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.017 [INFO][5859] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.019 [INFO][5859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.023390 containerd[1986]: 2025-11-08 00:26:54.021 [INFO][5852] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.024436 containerd[1986]: time="2025-11-08T00:26:54.023438573Z" level=info msg="TearDown network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" successfully" Nov 8 00:26:54.024436 containerd[1986]: time="2025-11-08T00:26:54.023468514Z" level=info msg="StopPodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" returns successfully" Nov 8 00:26:54.024436 containerd[1986]: time="2025-11-08T00:26:54.024277861Z" level=info msg="RemovePodSandbox for \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\"" Nov 8 00:26:54.024436 containerd[1986]: time="2025-11-08T00:26:54.024345478Z" level=info msg="Forcibly stopping sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\"" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.063 [WARNING][5873] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"7517a6de-bfae-458e-a17f-83662a231d90", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"997db2a3133eb6da80281ce7f347794ee19ede2f19fbeea6825ce81615d0bb59", Pod:"calico-apiserver-5d84f7c9c6-th4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90fa7b73f43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.063 [INFO][5873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.063 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" iface="eth0" netns="" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.063 [INFO][5873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.063 [INFO][5873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.089 [INFO][5880] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.089 [INFO][5880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.089 [INFO][5880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.096 [WARNING][5880] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.096 [INFO][5880] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" HandleID="k8s-pod-network.dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--th4rp-eth0" Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.098 [INFO][5880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.103932 containerd[1986]: 2025-11-08 00:26:54.101 [INFO][5873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224" Nov 8 00:26:54.105849 containerd[1986]: time="2025-11-08T00:26:54.103982062Z" level=info msg="TearDown network for sandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" successfully" Nov 8 00:26:54.110138 containerd[1986]: time="2025-11-08T00:26:54.110071411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:54.110138 containerd[1986]: time="2025-11-08T00:26:54.110139350Z" level=info msg="RemovePodSandbox \"dbd5a1296b828132d28e132d5a1a32f3e7afc902acf9d625db9044df9b9ef224\" returns successfully" Nov 8 00:26:54.111158 containerd[1986]: time="2025-11-08T00:26:54.110770448Z" level=info msg="StopPodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\"" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.152 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75831af6-29ac-43d1-829f-acf86112d6f8", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241", Pod:"coredns-66bc5c9577-vhxwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c1f96d2867", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.152 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.152 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" iface="eth0" netns="" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.152 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.152 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.175 [INFO][5901] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.175 [INFO][5901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.175 [INFO][5901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.182 [WARNING][5901] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.182 [INFO][5901] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.184 [INFO][5901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.187970 containerd[1986]: 2025-11-08 00:26:54.185 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.187970 containerd[1986]: time="2025-11-08T00:26:54.187948190Z" level=info msg="TearDown network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" successfully" Nov 8 00:26:54.187970 containerd[1986]: time="2025-11-08T00:26:54.187971469Z" level=info msg="StopPodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" returns successfully" Nov 8 00:26:54.190346 containerd[1986]: time="2025-11-08T00:26:54.190159526Z" level=info msg="RemovePodSandbox for \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\"" Nov 8 00:26:54.190346 containerd[1986]: time="2025-11-08T00:26:54.190194142Z" level=info msg="Forcibly stopping sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\"" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.240 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75831af6-29ac-43d1-829f-acf86112d6f8", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"55ce1950307ea007cd713f84a427cdf9c979848cda47a4bb83f3e660352cd241", Pod:"coredns-66bc5c9577-vhxwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c1f96d2867", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.241 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.241 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" iface="eth0" netns="" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.241 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.241 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.270 [INFO][5923] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.270 [INFO][5923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.270 [INFO][5923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.278 [WARNING][5923] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.278 [INFO][5923] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" HandleID="k8s-pod-network.2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Workload="ip--172--31--25--121-k8s-coredns--66bc5c9577--vhxwd-eth0" Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.279 [INFO][5923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.283845 containerd[1986]: 2025-11-08 00:26:54.281 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc" Nov 8 00:26:54.284584 containerd[1986]: time="2025-11-08T00:26:54.283897500Z" level=info msg="TearDown network for sandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" successfully" Nov 8 00:26:54.289480 containerd[1986]: time="2025-11-08T00:26:54.289422595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:54.289650 containerd[1986]: time="2025-11-08T00:26:54.289491780Z" level=info msg="RemovePodSandbox \"2d4744b33adb53ec218d6795a395bda176042151b2e8639f47df221aa4767cbc\" returns successfully" Nov 8 00:26:54.290259 containerd[1986]: time="2025-11-08T00:26:54.289988206Z" level=info msg="StopPodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\"" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.331 [WARNING][5937] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"59621f83-2f27-42e2-8c18-c119c79f6847", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504", Pod:"goldmane-7c778bb748-qb5jn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic2a0c1ada60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.332 [INFO][5937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.332 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" iface="eth0" netns="" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.332 [INFO][5937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.332 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.361 [INFO][5944] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.361 [INFO][5944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.361 [INFO][5944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.368 [WARNING][5944] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.368 [INFO][5944] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.369 [INFO][5944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.373785 containerd[1986]: 2025-11-08 00:26:54.371 [INFO][5937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.373785 containerd[1986]: time="2025-11-08T00:26:54.373780695Z" level=info msg="TearDown network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" successfully" Nov 8 00:26:54.374670 containerd[1986]: time="2025-11-08T00:26:54.373805677Z" level=info msg="StopPodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" returns successfully" Nov 8 00:26:54.375037 containerd[1986]: time="2025-11-08T00:26:54.374733104Z" level=info msg="RemovePodSandbox for \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\"" Nov 8 00:26:54.375037 containerd[1986]: time="2025-11-08T00:26:54.374783811Z" level=info msg="Forcibly stopping sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\"" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.411 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"59621f83-2f27-42e2-8c18-c119c79f6847", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"06c04b3a1ed1b654765bcb09745a1b39b4d57e92550b5818f743738b8bee5504", Pod:"goldmane-7c778bb748-qb5jn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2a0c1ada60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.411 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.411 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" iface="eth0" netns="" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.411 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.411 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.437 [INFO][5967] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.437 [INFO][5967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.437 [INFO][5967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.444 [WARNING][5967] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.444 [INFO][5967] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" HandleID="k8s-pod-network.fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Workload="ip--172--31--25--121-k8s-goldmane--7c778bb748--qb5jn-eth0" Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.446 [INFO][5967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.450267 containerd[1986]: 2025-11-08 00:26:54.448 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c" Nov 8 00:26:54.450267 containerd[1986]: time="2025-11-08T00:26:54.450240149Z" level=info msg="TearDown network for sandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" successfully" Nov 8 00:26:54.456868 containerd[1986]: time="2025-11-08T00:26:54.456828306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:54.457060 containerd[1986]: time="2025-11-08T00:26:54.457029898Z" level=info msg="RemovePodSandbox \"fab8aafd3d83704cfdcf6ecd68fcc705881b0bc15b2951c9f6f9ed8ca78d8c8c\" returns successfully" Nov 8 00:26:54.457865 containerd[1986]: time="2025-11-08T00:26:54.457571392Z" level=info msg="StopPodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\"" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.493 [WARNING][5982] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.493 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.493 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" iface="eth0" netns="" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.493 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.493 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.520 [INFO][5989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.520 [INFO][5989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.520 [INFO][5989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.527 [WARNING][5989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.527 [INFO][5989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.529 [INFO][5989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.533467 containerd[1986]: 2025-11-08 00:26:54.531 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.534161 containerd[1986]: time="2025-11-08T00:26:54.533518903Z" level=info msg="TearDown network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" successfully" Nov 8 00:26:54.534161 containerd[1986]: time="2025-11-08T00:26:54.533550911Z" level=info msg="StopPodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" returns successfully" Nov 8 00:26:54.534161 containerd[1986]: time="2025-11-08T00:26:54.534082716Z" level=info msg="RemovePodSandbox for \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\"" Nov 8 00:26:54.534161 containerd[1986]: time="2025-11-08T00:26:54.534118325Z" level=info msg="Forcibly stopping sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\"" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.576 [WARNING][6003] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" WorkloadEndpoint="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.576 [INFO][6003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.576 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" iface="eth0" netns="" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.576 [INFO][6003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.576 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.619 [INFO][6010] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.619 [INFO][6010] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.619 [INFO][6010] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.626 [WARNING][6010] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.626 [INFO][6010] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" HandleID="k8s-pod-network.a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Workload="ip--172--31--25--121-k8s-whisker--5f5fdfdfd5--8qkgd-eth0" Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.627 [INFO][6010] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.632335 containerd[1986]: 2025-11-08 00:26:54.629 [INFO][6003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707" Nov 8 00:26:54.632335 containerd[1986]: time="2025-11-08T00:26:54.631766495Z" level=info msg="TearDown network for sandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" successfully" Nov 8 00:26:54.637832 containerd[1986]: time="2025-11-08T00:26:54.637670698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:54.637832 containerd[1986]: time="2025-11-08T00:26:54.637736916Z" level=info msg="RemovePodSandbox \"a12d96d9bcc71d64ff4abd6308f8137ebbd772ccbe93dbca24f22bc101620707\" returns successfully" Nov 8 00:26:54.638549 containerd[1986]: time="2025-11-08T00:26:54.638245819Z" level=info msg="StopPodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\"" Nov 8 00:26:54.687755 containerd[1986]: time="2025-11-08T00:26:54.687710967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.676 [WARNING][6025] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"36acaf38-ef21-4c55-a6b7-ba0516894e6c", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac", Pod:"calico-apiserver-5d84f7c9c6-r5rdl", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3014fe1dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.676 [INFO][6025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.676 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" iface="eth0" netns="" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.676 [INFO][6025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.676 [INFO][6025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.712 [INFO][6032] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.712 [INFO][6032] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.712 [INFO][6032] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.722 [WARNING][6032] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.722 [INFO][6032] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.724 [INFO][6032] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.731554 containerd[1986]: 2025-11-08 00:26:54.726 [INFO][6025] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.733887 containerd[1986]: time="2025-11-08T00:26:54.733416726Z" level=info msg="TearDown network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" successfully" Nov 8 00:26:54.733887 containerd[1986]: time="2025-11-08T00:26:54.733459230Z" level=info msg="StopPodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" returns successfully" Nov 8 00:26:54.734356 containerd[1986]: time="2025-11-08T00:26:54.734022885Z" level=info msg="RemovePodSandbox for \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\"" Nov 8 00:26:54.734356 containerd[1986]: time="2025-11-08T00:26:54.734066897Z" level=info msg="Forcibly stopping sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\"" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.771 [WARNING][6046] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0", GenerateName:"calico-apiserver-5d84f7c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"36acaf38-ef21-4c55-a6b7-ba0516894e6c", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d84f7c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"f3b1a472492382ccdcf2cd8d9b988e22ed5cc19e4281452220c059503d6cccac", Pod:"calico-apiserver-5d84f7c9c6-r5rdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3014fe1dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.771 [INFO][6046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.771 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" iface="eth0" netns="" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.771 [INFO][6046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.771 [INFO][6046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.793 [INFO][6054] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.793 [INFO][6054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.794 [INFO][6054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.800 [WARNING][6054] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.800 [INFO][6054] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" HandleID="k8s-pod-network.208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Workload="ip--172--31--25--121-k8s-calico--apiserver--5d84f7c9c6--r5rdl-eth0" Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.802 [INFO][6054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.806051 containerd[1986]: 2025-11-08 00:26:54.804 [INFO][6046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17" Nov 8 00:26:54.806540 containerd[1986]: time="2025-11-08T00:26:54.806113950Z" level=info msg="TearDown network for sandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" successfully" Nov 8 00:26:54.811425 containerd[1986]: time="2025-11-08T00:26:54.811363081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:54.811425 containerd[1986]: time="2025-11-08T00:26:54.811422818Z" level=info msg="RemovePodSandbox \"208c3adbccb70ed3d48590cf252f06dc977ed833e9a29791a765da63f8fa3c17\" returns successfully" Nov 8 00:26:54.811994 containerd[1986]: time="2025-11-08T00:26:54.811959202Z" level=info msg="StopPodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\"" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.849 [WARNING][6068] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0", GenerateName:"calico-kube-controllers-756f78cd95-", Namespace:"calico-system", SelfLink:"", UID:"2854d816-9155-4f6f-a8ba-78872a67ac8c", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756f78cd95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c", Pod:"calico-kube-controllers-756f78cd95-ppxpv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7fc7e28218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.849 [INFO][6068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.849 [INFO][6068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" iface="eth0" netns="" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.849 [INFO][6068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.849 [INFO][6068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.872 [INFO][6075] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.872 [INFO][6075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.872 [INFO][6075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.882 [WARNING][6075] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.882 [INFO][6075] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.884 [INFO][6075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.887993 containerd[1986]: 2025-11-08 00:26:54.886 [INFO][6068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.888678 containerd[1986]: time="2025-11-08T00:26:54.888032495Z" level=info msg="TearDown network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" successfully" Nov 8 00:26:54.888678 containerd[1986]: time="2025-11-08T00:26:54.888055633Z" level=info msg="StopPodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" returns successfully" Nov 8 00:26:54.888678 containerd[1986]: time="2025-11-08T00:26:54.888596929Z" level=info msg="RemovePodSandbox for \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\"" Nov 8 00:26:54.888678 containerd[1986]: time="2025-11-08T00:26:54.888620784Z" level=info msg="Forcibly stopping sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\"" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.925 [WARNING][6089] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0", GenerateName:"calico-kube-controllers-756f78cd95-", Namespace:"calico-system", SelfLink:"", UID:"2854d816-9155-4f6f-a8ba-78872a67ac8c", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"756f78cd95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"09bfb9c0c91e22dae6bd924c8523864cc361f116da0bd593f0443919671f0f7c", Pod:"calico-kube-controllers-756f78cd95-ppxpv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7fc7e28218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.925 [INFO][6089] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.925 [INFO][6089] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" iface="eth0" netns="" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.925 [INFO][6089] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.925 [INFO][6089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.948 [INFO][6097] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.948 [INFO][6097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.948 [INFO][6097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.956 [WARNING][6097] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.956 [INFO][6097] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" HandleID="k8s-pod-network.179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Workload="ip--172--31--25--121-k8s-calico--kube--controllers--756f78cd95--ppxpv-eth0" Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.958 [INFO][6097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:54.962706 containerd[1986]: 2025-11-08 00:26:54.960 [INFO][6089] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1" Nov 8 00:26:54.963863 containerd[1986]: time="2025-11-08T00:26:54.962728259Z" level=info msg="TearDown network for sandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" successfully" Nov 8 00:26:54.968685 containerd[1986]: time="2025-11-08T00:26:54.968640816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:54.968824 containerd[1986]: time="2025-11-08T00:26:54.968704699Z" level=info msg="RemovePodSandbox \"179acd28360bc7ad4c19fcd9b1eede9f1157485532be545fff0805950b5436e1\" returns successfully" Nov 8 00:26:54.969315 containerd[1986]: time="2025-11-08T00:26:54.969263481Z" level=info msg="StopPodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\"" Nov 8 00:26:54.990156 containerd[1986]: time="2025-11-08T00:26:54.989854611Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:54.996351 containerd[1986]: time="2025-11-08T00:26:54.996129596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:54.996696 containerd[1986]: time="2025-11-08T00:26:54.996627097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:54.996907 kubelet[3190]: E1108 00:26:54.996817 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:54.996907 kubelet[3190]: E1108 00:26:54.996872 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:54.999390 kubelet[3190]: E1108 00:26:54.996962 3190 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:55.014715 containerd[1986]: time="2025-11-08T00:26:55.014663635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.023 [WARNING][6111] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543aa209-599c-4d8e-9da3-550061520690", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", 
ContainerID:"7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba", Pod:"csi-node-driver-hcwvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ba53049bac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.024 [INFO][6111] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.024 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" iface="eth0" netns="" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.024 [INFO][6111] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.024 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.053 [INFO][6119] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.053 [INFO][6119] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.054 [INFO][6119] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.061 [WARNING][6119] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.061 [INFO][6119] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.062 [INFO][6119] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:55.066994 containerd[1986]: 2025-11-08 00:26:55.064 [INFO][6111] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.066994 containerd[1986]: time="2025-11-08T00:26:55.066646537Z" level=info msg="TearDown network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" successfully" Nov 8 00:26:55.066994 containerd[1986]: time="2025-11-08T00:26:55.066676541Z" level=info msg="StopPodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" returns successfully" Nov 8 00:26:55.067995 containerd[1986]: time="2025-11-08T00:26:55.067560458Z" level=info msg="RemovePodSandbox for \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\"" Nov 8 00:26:55.067995 containerd[1986]: time="2025-11-08T00:26:55.067595673Z" level=info msg="Forcibly stopping sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\"" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.109 [WARNING][6134] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"543aa209-599c-4d8e-9da3-550061520690", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-121", ContainerID:"7131ce40e16993683f1cf36cacf1b3e6cae88b2e353360de5e6eaf511ed1caba", Pod:"csi-node-driver-hcwvd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ba53049bac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.110 [INFO][6134] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.110 [INFO][6134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" iface="eth0" netns="" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.110 [INFO][6134] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.110 [INFO][6134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.133 [INFO][6141] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.133 [INFO][6141] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.133 [INFO][6141] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.140 [WARNING][6141] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.140 [INFO][6141] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" HandleID="k8s-pod-network.3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Workload="ip--172--31--25--121-k8s-csi--node--driver--hcwvd-eth0" Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.142 [INFO][6141] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:55.145640 containerd[1986]: 2025-11-08 00:26:55.143 [INFO][6134] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa" Nov 8 00:26:55.146307 containerd[1986]: time="2025-11-08T00:26:55.145682442Z" level=info msg="TearDown network for sandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" successfully" Nov 8 00:26:55.151667 containerd[1986]: time="2025-11-08T00:26:55.151503968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:55.151667 containerd[1986]: time="2025-11-08T00:26:55.151564422Z" level=info msg="RemovePodSandbox \"3781ddcf4de678c9704d7d91ad350ab6a0923c95dd06b80f16bdb0edd13210aa\" returns successfully" Nov 8 00:26:55.305419 containerd[1986]: time="2025-11-08T00:26:55.304796474Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:55.319135 containerd[1986]: time="2025-11-08T00:26:55.319047187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:55.319387 containerd[1986]: time="2025-11-08T00:26:55.319137784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:55.319663 kubelet[3190]: E1108 00:26:55.319528 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:55.319663 kubelet[3190]: E1108 00:26:55.319570 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:55.319748 kubelet[3190]: E1108 00:26:55.319679 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed 
in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:55.319780 kubelet[3190]: E1108 00:26:55.319722 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:26:55.688072 containerd[1986]: time="2025-11-08T00:26:55.688031633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:55.985539 containerd[1986]: time="2025-11-08T00:26:55.985408793Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:55.987357 containerd[1986]: time="2025-11-08T00:26:55.987293768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:55.987522 containerd[1986]: time="2025-11-08T00:26:55.987339253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:55.987643 kubelet[3190]: E1108 00:26:55.987583 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:55.987643 kubelet[3190]: E1108 00:26:55.987636 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:55.987770 kubelet[3190]: E1108 00:26:55.987703 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qb5jn_calico-system(59621f83-2f27-42e2-8c18-c119c79f6847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:55.987770 kubelet[3190]: E1108 00:26:55.987730 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" 
podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:26:58.087727 systemd[1]: Started sshd@14-172.31.25.121:22-139.178.89.65:58846.service - OpenSSH per-connection server daemon (139.178.89.65:58846). Nov 8 00:26:58.274572 sshd[6154]: Accepted publickey for core from 139.178.89.65 port 58846 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:26:58.278438 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:26:58.284531 systemd-logind[1962]: New session 15 of user core. Nov 8 00:26:58.287645 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:26:58.555400 sshd[6154]: pam_unix(sshd:session): session closed for user core Nov 8 00:26:58.561756 systemd[1]: sshd@14-172.31.25.121:22-139.178.89.65:58846.service: Deactivated successfully. Nov 8 00:26:58.564231 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:26:58.566747 systemd-logind[1962]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:26:58.568601 systemd-logind[1962]: Removed session 15. 
Nov 8 00:26:58.685703 containerd[1986]: time="2025-11-08T00:26:58.685627016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:26:58.966185 containerd[1986]: time="2025-11-08T00:26:58.966111533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:58.968545 containerd[1986]: time="2025-11-08T00:26:58.968466333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:26:58.968717 containerd[1986]: time="2025-11-08T00:26:58.968559158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:58.968775 kubelet[3190]: E1108 00:26:58.968741 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:58.969272 kubelet[3190]: E1108 00:26:58.968790 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:58.969272 kubelet[3190]: E1108 00:26:58.968863 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-756f78cd95-ppxpv_calico-system(2854d816-9155-4f6f-a8ba-78872a67ac8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:58.969272 kubelet[3190]: E1108 00:26:58.968894 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:26:59.688099 containerd[1986]: time="2025-11-08T00:26:59.688055564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:59.970936 containerd[1986]: time="2025-11-08T00:26:59.970797031Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:59.972970 containerd[1986]: time="2025-11-08T00:26:59.972917360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:59.973089 containerd[1986]: time="2025-11-08T00:26:59.972936619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:59.973229 kubelet[3190]: E1108 00:26:59.973195 3190 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:59.973576 kubelet[3190]: E1108 00:26:59.973237 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:59.973576 kubelet[3190]: E1108 00:26:59.973339 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-th4rp_calico-apiserver(7517a6de-bfae-458e-a17f-83662a231d90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:59.974454 kubelet[3190]: E1108 00:26:59.974408 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:27:00.686062 containerd[1986]: time="2025-11-08T00:27:00.685859476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:00.992366 containerd[1986]: time="2025-11-08T00:27:00.992224415Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:00.995759 containerd[1986]: time="2025-11-08T00:27:00.995684998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:00.995759 containerd[1986]: time="2025-11-08T00:27:00.995707999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:00.996017 kubelet[3190]: E1108 00:27:00.995941 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:00.996017 kubelet[3190]: E1108 00:27:00.995989 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:00.996470 kubelet[3190]: E1108 00:27:00.996079 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-r5rdl_calico-apiserver(36acaf38-ef21-4c55-a6b7-ba0516894e6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:27:00.996470 kubelet[3190]: E1108 00:27:00.996124 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:27:01.702031 containerd[1986]: time="2025-11-08T00:27:01.701197871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:02.030073 containerd[1986]: time="2025-11-08T00:27:02.029940151Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:02.034665 containerd[1986]: time="2025-11-08T00:27:02.033709155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:02.034665 containerd[1986]: time="2025-11-08T00:27:02.034600396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:02.037091 kubelet[3190]: E1108 00:27:02.034832 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:02.037091 kubelet[3190]: E1108 00:27:02.037010 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:02.040680 kubelet[3190]: E1108 00:27:02.037112 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:02.044988 containerd[1986]: time="2025-11-08T00:27:02.044933429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:02.369179 containerd[1986]: time="2025-11-08T00:27:02.368920517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:02.374411 containerd[1986]: time="2025-11-08T00:27:02.374232893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:02.374595 containerd[1986]: time="2025-11-08T00:27:02.374247153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:02.374648 kubelet[3190]: E1108 00:27:02.374602 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:02.374709 kubelet[3190]: E1108 00:27:02.374655 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:02.374767 kubelet[3190]: E1108 00:27:02.374749 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:02.375020 kubelet[3190]: E1108 00:27:02.374803 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" 
podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:27:03.595807 systemd[1]: Started sshd@15-172.31.25.121:22-139.178.89.65:58848.service - OpenSSH per-connection server daemon (139.178.89.65:58848). Nov 8 00:27:03.756962 sshd[6170]: Accepted publickey for core from 139.178.89.65 port 58848 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:03.758616 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:03.763792 systemd-logind[1962]: New session 16 of user core. Nov 8 00:27:03.768875 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:27:03.985670 sshd[6170]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:03.989468 systemd[1]: sshd@15-172.31.25.121:22-139.178.89.65:58848.service: Deactivated successfully. Nov 8 00:27:03.993465 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:27:03.996694 systemd-logind[1962]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:27:03.998266 systemd-logind[1962]: Removed session 16. Nov 8 00:27:08.685435 kubelet[3190]: E1108 00:27:08.685389 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:27:09.025915 systemd[1]: Started sshd@16-172.31.25.121:22-139.178.89.65:34688.service - OpenSSH per-connection server daemon (139.178.89.65:34688). 
Nov 8 00:27:09.183532 sshd[6191]: Accepted publickey for core from 139.178.89.65 port 34688 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:09.185131 sshd[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:09.191180 systemd-logind[1962]: New session 17 of user core. Nov 8 00:27:09.200625 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:27:09.401974 sshd[6191]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:09.406559 systemd-logind[1962]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:27:09.408094 systemd[1]: sshd@16-172.31.25.121:22-139.178.89.65:34688.service: Deactivated successfully. Nov 8 00:27:09.411181 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:27:09.413348 systemd-logind[1962]: Removed session 17. Nov 8 00:27:10.687917 kubelet[3190]: E1108 00:27:10.687865 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:27:11.688174 kubelet[3190]: E1108 00:27:11.688125 3190 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:27:12.687461 kubelet[3190]: E1108 00:27:12.686956 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:27:12.687461 kubelet[3190]: E1108 00:27:12.686957 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:27:14.445722 systemd[1]: Started sshd@17-172.31.25.121:22-139.178.89.65:34690.service - OpenSSH per-connection server 
daemon (139.178.89.65:34690). Nov 8 00:27:14.670275 sshd[6231]: Accepted publickey for core from 139.178.89.65 port 34690 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:14.676643 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:14.686838 systemd-logind[1962]: New session 18 of user core. Nov 8 00:27:14.693532 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:27:15.481844 sshd[6231]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:15.492165 systemd-logind[1962]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:27:15.493808 systemd[1]: sshd@17-172.31.25.121:22-139.178.89.65:34690.service: Deactivated successfully. Nov 8 00:27:15.499823 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:27:15.514522 systemd-logind[1962]: Removed session 18. Nov 8 00:27:15.519674 systemd[1]: Started sshd@18-172.31.25.121:22-139.178.89.65:34696.service - OpenSSH per-connection server daemon (139.178.89.65:34696). Nov 8 00:27:15.709117 sshd[6244]: Accepted publickey for core from 139.178.89.65 port 34696 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:15.711452 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:15.720170 systemd-logind[1962]: New session 19 of user core. Nov 8 00:27:15.723507 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:27:16.475974 sshd[6244]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:16.491881 systemd[1]: sshd@18-172.31.25.121:22-139.178.89.65:34696.service: Deactivated successfully. Nov 8 00:27:16.498076 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:27:16.499875 systemd-logind[1962]: Session 19 logged out. Waiting for processes to exit. 
Nov 8 00:27:16.523570 systemd[1]: Started sshd@19-172.31.25.121:22-139.178.89.65:35860.service - OpenSSH per-connection server daemon (139.178.89.65:35860). Nov 8 00:27:16.525075 systemd-logind[1962]: Removed session 19. Nov 8 00:27:16.720995 sshd[6256]: Accepted publickey for core from 139.178.89.65 port 35860 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:16.724009 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:16.732694 systemd-logind[1962]: New session 20 of user core. Nov 8 00:27:16.738231 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:27:17.690015 kubelet[3190]: E1108 00:27:17.689962 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:27:17.855276 sshd[6256]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:17.861756 systemd-logind[1962]: Session 20 logged out. Waiting for processes to exit. 
Nov 8 00:27:17.862442 systemd[1]: sshd@19-172.31.25.121:22-139.178.89.65:35860.service: Deactivated successfully. Nov 8 00:27:17.867750 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:27:17.871649 systemd-logind[1962]: Removed session 20. Nov 8 00:27:17.904661 systemd[1]: Started sshd@20-172.31.25.121:22-139.178.89.65:35876.service - OpenSSH per-connection server daemon (139.178.89.65:35876). Nov 8 00:27:18.097223 sshd[6272]: Accepted publickey for core from 139.178.89.65 port 35876 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:18.099482 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:18.106912 systemd-logind[1962]: New session 21 of user core. Nov 8 00:27:18.112568 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:27:18.971403 sshd[6272]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:18.977921 systemd[1]: sshd@20-172.31.25.121:22-139.178.89.65:35876.service: Deactivated successfully. Nov 8 00:27:18.978625 systemd-logind[1962]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:27:18.985056 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:27:18.990833 systemd-logind[1962]: Removed session 21. Nov 8 00:27:19.017510 systemd[1]: Started sshd@21-172.31.25.121:22-139.178.89.65:35884.service - OpenSSH per-connection server daemon (139.178.89.65:35884). Nov 8 00:27:19.247545 sshd[6285]: Accepted publickey for core from 139.178.89.65 port 35884 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:19.248899 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:19.257858 systemd-logind[1962]: New session 22 of user core. Nov 8 00:27:19.263504 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 8 00:27:19.537616 sshd[6285]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:19.544526 systemd[1]: sshd@21-172.31.25.121:22-139.178.89.65:35884.service: Deactivated successfully. Nov 8 00:27:19.544724 systemd-logind[1962]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:27:19.549891 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:27:19.554085 systemd-logind[1962]: Removed session 22. Nov 8 00:27:21.688860 containerd[1986]: time="2025-11-08T00:27:21.688791556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:27:21.979497 containerd[1986]: time="2025-11-08T00:27:21.978754428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:21.981046 containerd[1986]: time="2025-11-08T00:27:21.980840469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:27:21.981046 containerd[1986]: time="2025-11-08T00:27:21.980945544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:27:21.981597 kubelet[3190]: E1108 00:27:21.981454 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:21.981597 kubelet[3190]: E1108 00:27:21.981532 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:21.989430 kubelet[3190]: E1108 00:27:21.987803 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:21.990953 containerd[1986]: time="2025-11-08T00:27:21.990654797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:27:22.300610 containerd[1986]: time="2025-11-08T00:27:22.300170593Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:22.302774 containerd[1986]: time="2025-11-08T00:27:22.302600072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:27:22.302774 containerd[1986]: time="2025-11-08T00:27:22.302715090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:22.303952 kubelet[3190]: E1108 00:27:22.303183 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:22.303952 kubelet[3190]: E1108 00:27:22.303358 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:22.303952 kubelet[3190]: E1108 00:27:22.303474 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:22.304176 kubelet[3190]: E1108 00:27:22.303527 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:27:23.700512 containerd[1986]: time="2025-11-08T00:27:23.699789806Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:27:23.982928 containerd[1986]: time="2025-11-08T00:27:23.982660988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:23.991314 containerd[1986]: time="2025-11-08T00:27:23.984676881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:27:23.991314 containerd[1986]: time="2025-11-08T00:27:23.984762711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:23.992594 kubelet[3190]: E1108 00:27:23.991617 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:23.992594 kubelet[3190]: E1108 00:27:23.991661 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:23.992594 kubelet[3190]: E1108 00:27:23.991821 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-756f78cd95-ppxpv_calico-system(2854d816-9155-4f6f-a8ba-78872a67ac8c): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:23.992594 kubelet[3190]: E1108 00:27:23.991854 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:27:23.993082 containerd[1986]: time="2025-11-08T00:27:23.992427013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:24.261090 containerd[1986]: time="2025-11-08T00:27:24.260960147Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:24.265052 containerd[1986]: time="2025-11-08T00:27:24.264944933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:24.265052 containerd[1986]: time="2025-11-08T00:27:24.264987801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:24.265269 kubelet[3190]: E1108 00:27:24.265227 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:24.265337 kubelet[3190]: E1108 00:27:24.265272 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:24.265490 kubelet[3190]: E1108 00:27:24.265457 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-th4rp_calico-apiserver(7517a6de-bfae-458e-a17f-83662a231d90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:24.265532 kubelet[3190]: E1108 00:27:24.265504 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:27:24.266707 containerd[1986]: time="2025-11-08T00:27:24.266355308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:27:24.543558 containerd[1986]: time="2025-11-08T00:27:24.543418114Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:24.556325 containerd[1986]: 
time="2025-11-08T00:27:24.555720800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:27:24.556325 containerd[1986]: time="2025-11-08T00:27:24.555861304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:24.556552 kubelet[3190]: E1108 00:27:24.556166 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:24.556552 kubelet[3190]: E1108 00:27:24.556220 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:24.556552 kubelet[3190]: E1108 00:27:24.556366 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qb5jn_calico-system(59621f83-2f27-42e2-8c18-c119c79f6847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:24.557583 kubelet[3190]: E1108 00:27:24.557513 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:27:24.587630 systemd[1]: Started sshd@22-172.31.25.121:22-139.178.89.65:35890.service - OpenSSH per-connection server daemon (139.178.89.65:35890). Nov 8 00:27:24.687801 containerd[1986]: time="2025-11-08T00:27:24.687756713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:24.804738 sshd[6309]: Accepted publickey for core from 139.178.89.65 port 35890 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:24.817582 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:24.824183 systemd-logind[1962]: New session 23 of user core. Nov 8 00:27:24.836539 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 8 00:27:24.981323 containerd[1986]: time="2025-11-08T00:27:24.981232327Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:24.985466 containerd[1986]: time="2025-11-08T00:27:24.985393013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:24.985626 containerd[1986]: time="2025-11-08T00:27:24.985505822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:24.985777 kubelet[3190]: E1108 00:27:24.985731 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:24.985840 kubelet[3190]: E1108 00:27:24.985790 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:24.985915 kubelet[3190]: E1108 00:27:24.985892 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-r5rdl_calico-apiserver(36acaf38-ef21-4c55-a6b7-ba0516894e6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:24.985966 kubelet[3190]: E1108 00:27:24.985942 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:27:25.283044 sshd[6309]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:25.289851 systemd[1]: sshd@22-172.31.25.121:22-139.178.89.65:35890.service: Deactivated successfully. Nov 8 00:27:25.293732 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:27:25.297356 systemd-logind[1962]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:27:25.299418 systemd-logind[1962]: Removed session 23. Nov 8 00:27:30.324944 systemd[1]: Started sshd@23-172.31.25.121:22-139.178.89.65:50794.service - OpenSSH per-connection server daemon (139.178.89.65:50794). Nov 8 00:27:30.510468 sshd[6326]: Accepted publickey for core from 139.178.89.65 port 50794 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:30.514275 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:30.522784 systemd-logind[1962]: New session 24 of user core. Nov 8 00:27:30.527656 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:27:30.832506 sshd[6326]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:30.838842 systemd[1]: sshd@23-172.31.25.121:22-139.178.89.65:50794.service: Deactivated successfully. Nov 8 00:27:30.840845 systemd[1]: session-24.scope: Deactivated successfully. 
Nov 8 00:27:30.841660 systemd-logind[1962]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:27:30.843214 systemd-logind[1962]: Removed session 24. Nov 8 00:27:32.688037 containerd[1986]: time="2025-11-08T00:27:32.687269170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:32.989350 containerd[1986]: time="2025-11-08T00:27:32.986922540Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:32.989836 containerd[1986]: time="2025-11-08T00:27:32.989782927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:32.989970 containerd[1986]: time="2025-11-08T00:27:32.989893172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:32.990105 kubelet[3190]: E1108 00:27:32.990064 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:32.990621 kubelet[3190]: E1108 00:27:32.990120 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:32.990621 kubelet[3190]: E1108 00:27:32.990322 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:32.992515 containerd[1986]: time="2025-11-08T00:27:32.992481158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:33.476533 containerd[1986]: time="2025-11-08T00:27:33.476473138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:33.478666 containerd[1986]: time="2025-11-08T00:27:33.478606804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:33.478806 containerd[1986]: time="2025-11-08T00:27:33.478733668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:33.478948 kubelet[3190]: E1108 00:27:33.478898 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:33.479020 kubelet[3190]: E1108 00:27:33.478954 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:33.479065 kubelet[3190]: E1108 00:27:33.479048 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:33.479165 kubelet[3190]: E1108 00:27:33.479104 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:27:34.689387 kubelet[3190]: E1108 00:27:34.689106 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:27:35.691590 kubelet[3190]: E1108 00:27:35.691542 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:27:35.877028 systemd[1]: Started sshd@24-172.31.25.121:22-139.178.89.65:50804.service - OpenSSH per-connection server daemon (139.178.89.65:50804). Nov 8 00:27:36.069512 sshd[6341]: Accepted publickey for core from 139.178.89.65 port 50804 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:36.071892 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:36.080620 systemd-logind[1962]: New session 25 of user core. Nov 8 00:27:36.085134 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 8 00:27:36.339813 sshd[6341]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:36.347169 systemd[1]: sshd@24-172.31.25.121:22-139.178.89.65:50804.service: Deactivated successfully. Nov 8 00:27:36.347448 systemd-logind[1962]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:27:36.353616 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:27:36.356602 systemd-logind[1962]: Removed session 25. Nov 8 00:27:36.686136 kubelet[3190]: E1108 00:27:36.686079 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:27:37.687308 kubelet[3190]: E1108 00:27:37.686535 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:27:38.686593 kubelet[3190]: E1108 00:27:38.685923 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:27:41.374001 systemd[1]: Started sshd@25-172.31.25.121:22-139.178.89.65:53172.service - OpenSSH per-connection server daemon (139.178.89.65:53172). Nov 8 00:27:41.584276 sshd[6375]: Accepted publickey for core from 139.178.89.65 port 53172 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:41.586577 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:41.593044 systemd-logind[1962]: New session 26 of user core. Nov 8 00:27:41.599583 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 8 00:27:42.251579 sshd[6375]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:42.258325 systemd-logind[1962]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:27:42.260905 systemd[1]: sshd@25-172.31.25.121:22-139.178.89.65:53172.service: Deactivated successfully. Nov 8 00:27:42.266929 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:27:42.271560 systemd-logind[1962]: Removed session 26. 
Nov 8 00:27:46.687873 kubelet[3190]: E1108 00:27:46.687757 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17" Nov 8 00:27:46.690657 kubelet[3190]: E1108 00:27:46.690610 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:27:47.290677 systemd[1]: Started sshd@26-172.31.25.121:22-139.178.89.65:36002.service - OpenSSH per-connection server daemon (139.178.89.65:36002). Nov 8 00:27:47.464314 sshd[6389]: Accepted publickey for core from 139.178.89.65 port 36002 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:27:47.467134 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:47.473833 systemd-logind[1962]: New session 27 of user core. Nov 8 00:27:47.480508 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 8 00:27:47.688356 kubelet[3190]: E1108 00:27:47.687605 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847" Nov 8 00:27:47.913679 sshd[6389]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:47.921925 systemd[1]: sshd@26-172.31.25.121:22-139.178.89.65:36002.service: Deactivated successfully. Nov 8 00:27:47.928519 systemd[1]: session-27.scope: Deactivated successfully. Nov 8 00:27:47.930011 systemd-logind[1962]: Session 27 logged out. Waiting for processes to exit. Nov 8 00:27:47.934832 systemd-logind[1962]: Removed session 27. 
Nov 8 00:27:48.685104 kubelet[3190]: E1108 00:27:48.684993 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90" Nov 8 00:27:51.688536 kubelet[3190]: E1108 00:27:51.688486 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c" Nov 8 00:27:52.685731 kubelet[3190]: E1108 00:27:52.685629 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:27:58.685136 kubelet[3190]: E1108 
00:27:58.685083 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17"
Nov 8 00:27:59.685334 kubelet[3190]: E1108 00:27:59.685153 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90"
Nov 8 00:28:01.745473 kubelet[3190]: E1108 00:28:01.745414 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690"
Nov 8 00:28:02.649964 systemd[1]: cri-containerd-1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df.scope: Deactivated successfully.
Nov 8 00:28:02.650400 systemd[1]: cri-containerd-1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df.scope: Consumed 3.658s CPU time, 28.1M memory peak, 0B memory swap peak.
Nov 8 00:28:02.694138 kubelet[3190]: E1108 00:28:02.693552 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847"
Nov 8 00:28:02.865261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df-rootfs.mount: Deactivated successfully.
Nov 8 00:28:02.925213 containerd[1986]: time="2025-11-08T00:28:02.915664985Z" level=info msg="shim disconnected" id=1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df namespace=k8s.io
Nov 8 00:28:02.945486 containerd[1986]: time="2025-11-08T00:28:02.945426213Z" level=warning msg="cleaning up after shim disconnected" id=1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df namespace=k8s.io
Nov 8 00:28:02.945486 containerd[1986]: time="2025-11-08T00:28:02.945474277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:28:03.357097 systemd[1]: cri-containerd-6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac.scope: Deactivated successfully.
Nov 8 00:28:03.357626 systemd[1]: cri-containerd-6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac.scope: Consumed 14.003s CPU time.
Nov 8 00:28:03.388503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac-rootfs.mount: Deactivated successfully.
Nov 8 00:28:03.400430 containerd[1986]: time="2025-11-08T00:28:03.400355522Z" level=info msg="shim disconnected" id=6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac namespace=k8s.io
Nov 8 00:28:03.400430 containerd[1986]: time="2025-11-08T00:28:03.400423679Z" level=warning msg="cleaning up after shim disconnected" id=6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac namespace=k8s.io
Nov 8 00:28:03.400430 containerd[1986]: time="2025-11-08T00:28:03.400435695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:28:03.664861 kubelet[3190]: I1108 00:28:03.664802 3190 scope.go:117] "RemoveContainer" containerID="1119a806dbc11a0455c3ee113d3894e7c8ae1c9b59342583871cdc6ce48d35df"
Nov 8 00:28:03.665278 kubelet[3190]: I1108 00:28:03.665150 3190 scope.go:117] "RemoveContainer" containerID="6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac"
Nov 8 00:28:03.709172 kubelet[3190]: E1108 00:28:03.709094 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c"
Nov 8 00:28:03.709596 containerd[1986]: time="2025-11-08T00:28:03.709548543Z" level=info msg="CreateContainer within sandbox \"629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 8 00:28:03.709835 containerd[1986]: time="2025-11-08T00:28:03.709802018Z" level=info msg="CreateContainer within sandbox
\"9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 8 00:28:03.759864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893255342.mount: Deactivated successfully.
Nov 8 00:28:03.780802 containerd[1986]: time="2025-11-08T00:28:03.780745957Z" level=info msg="CreateContainer within sandbox \"629b45647cc9c0fd45636fe8c94813df70e00d2d804c6ba42537e7e5561a7fd6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746\""
Nov 8 00:28:03.783269 containerd[1986]: time="2025-11-08T00:28:03.783214440Z" level=info msg="CreateContainer within sandbox \"9851bfdabb46d918b6f5859dcc82e483c8ea539071813636f63d067127cbc39b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"53e1535b0026125214cff8ee406089f2eea5f6c6dc9752572a157027d5b92562\""
Nov 8 00:28:03.786535 containerd[1986]: time="2025-11-08T00:28:03.786495983Z" level=info msg="StartContainer for \"2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746\""
Nov 8 00:28:03.789305 containerd[1986]: time="2025-11-08T00:28:03.787602656Z" level=info msg="StartContainer for \"53e1535b0026125214cff8ee406089f2eea5f6c6dc9752572a157027d5b92562\""
Nov 8 00:28:03.837526 systemd[1]: Started cri-containerd-53e1535b0026125214cff8ee406089f2eea5f6c6dc9752572a157027d5b92562.scope - libcontainer container 53e1535b0026125214cff8ee406089f2eea5f6c6dc9752572a157027d5b92562.
Nov 8 00:28:03.848546 systemd[1]: Started cri-containerd-2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746.scope - libcontainer container 2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746.
Nov 8 00:28:03.961875 containerd[1986]: time="2025-11-08T00:28:03.961681234Z" level=info msg="StartContainer for \"2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746\" returns successfully"
Nov 8 00:28:03.974407 containerd[1986]: time="2025-11-08T00:28:03.973988715Z" level=info msg="StartContainer for \"53e1535b0026125214cff8ee406089f2eea5f6c6dc9752572a157027d5b92562\" returns successfully"
Nov 8 00:28:05.685654 containerd[1986]: time="2025-11-08T00:28:05.685352153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:28:05.749910 kubelet[3190]: E1108 00:28:05.749858 3190 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-25-121)"
Nov 8 00:28:06.129902 containerd[1986]: time="2025-11-08T00:28:06.129841057Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:06.132154 containerd[1986]: time="2025-11-08T00:28:06.132082356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:28:06.132308 containerd[1986]: time="2025-11-08T00:28:06.132169577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:28:06.132520 kubelet[3190]: E1108 00:28:06.132479 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8
00:28:06.132581 kubelet[3190]: E1108 00:28:06.132527 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:28:06.132630 kubelet[3190]: E1108 00:28:06.132612 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-r5rdl_calico-apiserver(36acaf38-ef21-4c55-a6b7-ba0516894e6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:06.132683 kubelet[3190]: E1108 00:28:06.132647 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c"
Nov 8 00:28:08.461661 systemd[1]: cri-containerd-ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401.scope: Deactivated successfully.
Nov 8 00:28:08.461952 systemd[1]: cri-containerd-ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401.scope: Consumed 3.018s CPU time, 18.0M memory peak, 0B memory swap peak.
Nov 8 00:28:08.489590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401-rootfs.mount: Deactivated successfully.
Nov 8 00:28:08.516788 containerd[1986]: time="2025-11-08T00:28:08.516725266Z" level=info msg="shim disconnected" id=ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401 namespace=k8s.io
Nov 8 00:28:08.516788 containerd[1986]: time="2025-11-08T00:28:08.516777920Z" level=warning msg="cleaning up after shim disconnected" id=ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401 namespace=k8s.io
Nov 8 00:28:08.516788 containerd[1986]: time="2025-11-08T00:28:08.516786767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:28:08.532952 containerd[1986]: time="2025-11-08T00:28:08.532891337Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:28:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 8 00:28:08.710231 kubelet[3190]: I1108 00:28:08.710187 3190 scope.go:117] "RemoveContainer" containerID="ea57a97d14d12bcac2e223fd10ebe7816dc6cf015286f1aa51b1c8ede8166401"
Nov 8 00:28:08.713073 containerd[1986]: time="2025-11-08T00:28:08.712832255Z" level=info msg="CreateContainer within sandbox \"db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 8 00:28:08.735729 containerd[1986]: time="2025-11-08T00:28:08.735678032Z" level=info msg="CreateContainer within sandbox \"db9ef81e4e041679bb36c851b96752c8f6b887a9130ff539ccc19eb7cf2609b8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"164930745b8f5a4c3746c979a0c7dabd33f81e007a1e3edfd56cd7b95d870b13\""
Nov 8 00:28:08.737329 containerd[1986]: time="2025-11-08T00:28:08.736239074Z" level=info msg="StartContainer for
\"164930745b8f5a4c3746c979a0c7dabd33f81e007a1e3edfd56cd7b95d870b13\""
Nov 8 00:28:08.780541 systemd[1]: Started cri-containerd-164930745b8f5a4c3746c979a0c7dabd33f81e007a1e3edfd56cd7b95d870b13.scope - libcontainer container 164930745b8f5a4c3746c979a0c7dabd33f81e007a1e3edfd56cd7b95d870b13.
Nov 8 00:28:08.831890 containerd[1986]: time="2025-11-08T00:28:08.831831628Z" level=info msg="StartContainer for \"164930745b8f5a4c3746c979a0c7dabd33f81e007a1e3edfd56cd7b95d870b13\" returns successfully"
Nov 8 00:28:10.685258 containerd[1986]: time="2025-11-08T00:28:10.685220333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:28:10.993454 containerd[1986]: time="2025-11-08T00:28:10.993261345Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:10.995650 containerd[1986]: time="2025-11-08T00:28:10.995518118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:28:10.995650 containerd[1986]: time="2025-11-08T00:28:10.995599244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:28:10.995823 kubelet[3190]: E1108 00:28:10.995746 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:28:10.995823 kubelet[3190]: E1108 00:28:10.995789 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:28:10.996155 kubelet[3190]: E1108 00:28:10.995859 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5d84f7c9c6-th4rp_calico-apiserver(7517a6de-bfae-458e-a17f-83662a231d90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:10.996155 kubelet[3190]: E1108 00:28:10.995888 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90"
Nov 8 00:28:13.686085 containerd[1986]: time="2025-11-08T00:28:13.686011362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:28:13.978082 containerd[1986]: time="2025-11-08T00:28:13.977954044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:13.980551 containerd[1986]: time="2025-11-08T00:28:13.980440348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not
found"
Nov 8 00:28:13.980551 containerd[1986]: time="2025-11-08T00:28:13.980492666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:28:13.981377 kubelet[3190]: E1108 00:28:13.980837 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:28:13.981377 kubelet[3190]: E1108 00:28:13.980886 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:28:13.981377 kubelet[3190]: E1108 00:28:13.980965 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:13.981960 containerd[1986]: time="2025-11-08T00:28:13.981906343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:28:14.258930 containerd[1986]: time="2025-11-08T00:28:14.258790707Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:14.261061 containerd[1986]: time="2025-11-08T00:28:14.260935171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code =
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:28:14.261061 containerd[1986]: time="2025-11-08T00:28:14.261014403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:28:14.261215 kubelet[3190]: E1108 00:28:14.261135 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:28:14.261215 kubelet[3190]: E1108 00:28:14.261181 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:28:14.261296 kubelet[3190]: E1108 00:28:14.261265 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5b76967f45-ch758_calico-system(ee745a66-da8b-4b06-b62f-77bdcb118c17): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:14.261425 kubelet[3190]: E1108 00:28:14.261317 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b76967f45-ch758" podUID="ee745a66-da8b-4b06-b62f-77bdcb118c17"
Nov 8 00:28:15.520387 systemd[1]: cri-containerd-2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746.scope: Deactivated successfully.
Nov 8 00:28:15.548796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746-rootfs.mount: Deactivated successfully.
Nov 8 00:28:15.573188 containerd[1986]: time="2025-11-08T00:28:15.573131334Z" level=info msg="shim disconnected" id=2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746 namespace=k8s.io
Nov 8 00:28:15.573188 containerd[1986]: time="2025-11-08T00:28:15.573181604Z" level=warning msg="cleaning up after shim disconnected" id=2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746 namespace=k8s.io
Nov 8 00:28:15.573188 containerd[1986]: time="2025-11-08T00:28:15.573190184Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:28:15.686025 containerd[1986]: time="2025-11-08T00:28:15.685909953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:28:15.745771 kubelet[3190]: I1108 00:28:15.734209 3190 scope.go:117] "RemoveContainer" containerID="6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac"
Nov 8 00:28:15.746374 kubelet[3190]: I1108 00:28:15.745944 3190 scope.go:117] "RemoveContainer" containerID="2cd08cfefbc9fc1bdc71c561c4005dc78adc44e85e9e69b9f5ab47cc47e16746"
Nov 8 00:28:15.746374 kubelet[3190]: E1108 00:28:15.746176 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-v7m75_tigera-operator(ce15cdf6-d79a-45c3-b348-04df18c498e8)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-v7m75" podUID="ce15cdf6-d79a-45c3-b348-04df18c498e8"
Nov 8 00:28:15.760545 kubelet[3190]: E1108 00:28:15.760435 3190 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-121?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 8 00:28:15.775592 containerd[1986]: time="2025-11-08T00:28:15.775469228Z" level=info msg="RemoveContainer for
\"6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac\""
Nov 8 00:28:15.781355 containerd[1986]: time="2025-11-08T00:28:15.781157680Z" level=info msg="RemoveContainer for \"6c64a388bc5650f1366e14832b87487ccbf1d6e0f7487942d7779cc8b40707ac\" returns successfully"
Nov 8 00:28:15.968144 containerd[1986]: time="2025-11-08T00:28:15.968085511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:15.970357 containerd[1986]: time="2025-11-08T00:28:15.970203007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:28:15.970357 containerd[1986]: time="2025-11-08T00:28:15.970253430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:28:15.970615 kubelet[3190]: E1108 00:28:15.970563 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:28:15.970769 kubelet[3190]: E1108 00:28:15.970621 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:28:15.971439 kubelet[3190]: E1108 00:28:15.970909 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod
goldmane-7c778bb748-qb5jn_calico-system(59621f83-2f27-42e2-8c18-c119c79f6847): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:15.971439 kubelet[3190]: E1108 00:28:15.971017 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qb5jn" podUID="59621f83-2f27-42e2-8c18-c119c79f6847"
Nov 8 00:28:15.971586 containerd[1986]: time="2025-11-08T00:28:15.971114783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:28:16.266095 containerd[1986]: time="2025-11-08T00:28:16.266052590Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:16.268206 containerd[1986]: time="2025-11-08T00:28:16.268157396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:28:16.268360 containerd[1986]: time="2025-11-08T00:28:16.268244252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:28:16.268480 kubelet[3190]: E1108 00:28:16.268437 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:28:16.268556 kubelet[3190]: E1108 00:28:16.268486 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:28:16.268598 kubelet[3190]: E1108 00:28:16.268554 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-756f78cd95-ppxpv_calico-system(2854d816-9155-4f6f-a8ba-78872a67ac8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:16.268669 kubelet[3190]: E1108 00:28:16.268586 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-756f78cd95-ppxpv" podUID="2854d816-9155-4f6f-a8ba-78872a67ac8c"
Nov 8 00:28:16.684882 containerd[1986]: time="2025-11-08T00:28:16.684835618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8
00:28:16.968682 containerd[1986]: time="2025-11-08T00:28:16.968476609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:28:16.970675 containerd[1986]: time="2025-11-08T00:28:16.970599018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:28:16.970963 containerd[1986]: time="2025-11-08T00:28:16.970710119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:28:16.971073 kubelet[3190]: E1108 00:28:16.970922 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:28:16.971073 kubelet[3190]: E1108 00:28:16.970977 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:28:16.971073 kubelet[3190]: E1108 00:28:16.971089 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:28:16.972727
containerd[1986]: time="2025-11-08T00:28:16.972455149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:28:17.246951 containerd[1986]: time="2025-11-08T00:28:17.246819259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:17.249016 containerd[1986]: time="2025-11-08T00:28:17.248933069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:28:17.249177 containerd[1986]: time="2025-11-08T00:28:17.249035141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:28:17.249231 kubelet[3190]: E1108 00:28:17.249195 3190 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:17.249307 kubelet[3190]: E1108 00:28:17.249236 3190 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:28:17.249343 kubelet[3190]: E1108 00:28:17.249318 3190 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
csi-node-driver-registrar start failed in pod csi-node-driver-hcwvd_calico-system(543aa209-599c-4d8e-9da3-550061520690): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:17.249409 kubelet[3190]: E1108 00:28:17.249356 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hcwvd" podUID="543aa209-599c-4d8e-9da3-550061520690" Nov 8 00:28:20.685447 kubelet[3190]: E1108 00:28:20.685396 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-r5rdl" podUID="36acaf38-ef21-4c55-a6b7-ba0516894e6c" Nov 8 00:28:21.684949 kubelet[3190]: E1108 
00:28:21.684888 3190 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d84f7c9c6-th4rp" podUID="7517a6de-bfae-458e-a17f-83662a231d90"