Nov 8 00:33:58.924436 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:33:58.924464 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:33:58.924477 kernel: BIOS-provided physical RAM map:
Nov 8 00:33:58.924484 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:33:58.924490 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 8 00:33:58.924497 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Nov 8 00:33:58.924505 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Nov 8 00:33:58.924512 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 8 00:33:58.924519 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 8 00:33:58.924528 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 8 00:33:58.924535 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 8 00:33:58.924542 kernel: NX (Execute Disable) protection: active
Nov 8 00:33:58.924549 kernel: APIC: Static calls initialized
Nov 8 00:33:58.924556 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:33:58.924565 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Nov 8 00:33:58.924575 kernel: SMBIOS 2.7 present.
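The command line logged above is the single source for everything dracut, systemd, and Ignition re-parse later in this boot (root=LABEL=ROOT, the dm-verity usrhash, the EC2 OEM id). A minimal sketch of splitting such a line into key/value pairs; reading /proc/cmdline is standard, while the quoting-free split is an assumption that holds for the flags shown here:

    # Minimal sketch: split a kernel command line into key/value flags.
    # Assumes no quoted values, which holds for the parameters logged above.
    def parse_cmdline(line: str) -> dict:
        params = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("root"))            # e.g. "LABEL=ROOT"
    print(params.get("verity.usrhash"))  # hash the /usr partition must match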
Nov 8 00:33:58.924583 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 8 00:33:58.924591 kernel: Hypervisor detected: KVM
Nov 8 00:33:58.924598 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:33:58.924606 kernel: kvm-clock: using sched offset of 3687931587 cycles
Nov 8 00:33:58.924615 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:33:58.924622 kernel: tsc: Detected 2499.998 MHz processor
Nov 8 00:33:58.924631 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:33:58.924639 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:33:58.924646 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 8 00:33:58.924656 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:33:58.924664 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:33:58.924672 kernel: Using GB pages for direct mapping
Nov 8 00:33:58.924680 kernel: Secure boot disabled
Nov 8 00:33:58.924687 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:33:58.924695 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 8 00:33:58.924702 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 8 00:33:58.924710 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 8 00:33:58.924718 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 8 00:33:58.924728 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 8 00:33:58.924736 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 8 00:33:58.924743 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 8 00:33:58.924751 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 8 00:33:58.924759 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 8 00:33:58.924767 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 8 00:33:58.924778 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 8 00:33:58.924789 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 8 00:33:58.924797 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 8 00:33:58.924805 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 8 00:33:58.924813 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 8 00:33:58.924822 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 8 00:33:58.924830 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 8 00:33:58.924840 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 8 00:33:58.924848 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 8 00:33:58.924857 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 8 00:33:58.924865 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 8 00:33:58.924873 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 8 00:33:58.924881 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 8 00:33:58.924889 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 8 00:33:58.924897 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:33:58.924905 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:33:58.924914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 8 00:33:58.924924 kernel: NUMA: Initialized distance table, cnt=1
Nov 8 00:33:58.924932 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Nov 8 00:33:58.924940 kernel: Zone ranges:
Nov 8 00:33:58.924948 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:33:58.924956 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 8 00:33:58.924965 kernel: Normal empty
Nov 8 00:33:58.924973 kernel: Movable zone start for each node
Nov 8 00:33:58.924981 kernel: Early memory node ranges
Nov 8 00:33:58.924989 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:33:58.924999 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 8 00:33:58.925007 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 8 00:33:58.925016 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 8 00:33:58.925024 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:33:58.925032 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:33:58.925041 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 8 00:33:58.925049 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 8 00:33:58.925057 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 8 00:33:58.925065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:33:58.925076 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 8 00:33:58.925084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:33:58.925092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:33:58.925101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:33:58.925109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:33:58.925117 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:33:58.925125 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:33:58.925133 kernel: TSC deadline timer available
Nov 8 00:33:58.925142 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:33:58.925150 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:33:58.925161 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 8 00:33:58.925169 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:33:58.925177 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:33:58.925185 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:33:58.925193 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:33:58.925202 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:33:58.925209 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:33:58.925217 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:33:58.925225 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:33:58.925237 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:33:58.925246 kernel: random: crng init done
Nov 8 00:33:58.925254 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:33:58.925262 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:33:58.925270 kernel: Fallback order for Node 0: 0
Nov 8 00:33:58.925278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Nov 8 00:33:58.925287 kernel: Policy zone: DMA32
Nov 8 00:33:58.925295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:33:58.925306 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 162940K reserved, 0K cma-reserved)
Nov 8 00:33:58.925314 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:33:58.925323 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:33:58.925331 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:33:58.925339 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:33:58.925347 kernel: Dynamic Preempt: voluntary
Nov 8 00:33:58.925355 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:33:58.925378 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:33:58.925387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:33:58.925398 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:33:58.925406 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:33:58.925414 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:33:58.925423 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:33:58.925431 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:33:58.925439 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:33:58.925447 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:33:58.925466 kernel: Console: colour dummy device 80x25
Nov 8 00:33:58.925475 kernel: printk: console [tty0] enabled
Nov 8 00:33:58.925484 kernel: printk: console [ttyS0] enabled
Nov 8 00:33:58.925493 kernel: ACPI: Core revision 20230628
Nov 8 00:33:58.925501 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 8 00:33:58.925513 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:33:58.925521 kernel: x2apic enabled
Nov 8 00:33:58.925530 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:33:58.925539 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 00:33:58.925558 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 8 00:33:58.925575 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:33:58.925588 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:33:58.925600 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:33:58.925611 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:33:58.925620 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:33:58.925629 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:33:58.925638 kernel: RETBleed: Vulnerable
Nov 8 00:33:58.925646 kernel: Speculative Store Bypass: Vulnerable
Nov 8 00:33:58.925655 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:33:58.925664 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:33:58.925675 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 8 00:33:58.925684 kernel: active return thunk: its_return_thunk
Nov 8 00:33:58.925693 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:33:58.925701 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:33:58.925710 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:33:58.925719 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:33:58.925728 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 8 00:33:58.925736 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 8 00:33:58.925745 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 8 00:33:58.925754 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 8 00:33:58.925762 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 8 00:33:58.925774 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 8 00:33:58.925782 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:33:58.925791 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 8 00:33:58.925800 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 8 00:33:58.925820 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 8 00:33:58.925829 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 8 00:33:58.925838 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 8 00:33:58.925847 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 8 00:33:58.925855 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 8 00:33:58.925864 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:33:58.925873 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:33:58.925885 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:33:58.925893 kernel: landlock: Up and running.
Nov 8 00:33:58.925903 kernel: SELinux: Initializing.
Nov 8 00:33:58.925911 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:33:58.925920 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:33:58.925929 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 8 00:33:58.925938 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:33:58.925947 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:33:58.925956 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:33:58.925965 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 8 00:33:58.925976 kernel: signal: max sigframe size: 3632
Nov 8 00:33:58.925985 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:33:58.925994 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:33:58.926003 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:33:58.926011 kernel: smp: Bringing up secondary CPUs ...
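The RETBleed/MDS/MMIO verdicts above stay at "Vulnerable" because no microcode mitigations are exposed to this guest; the kernel re-exports the same verdicts after boot under /sys/devices/system/cpu/vulnerabilities. A short sketch that reads them back; the sysfs path is standard, nothing else is assumed:

    # Read back the speculative-execution verdicts logged above from sysfs.
    import pathlib

    vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
    # Prints lines such as "retbleed: Vulnerable", matching the kernel messages.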
Nov 8 00:33:58.926020 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:33:58.926029 kernel: .... node #0, CPUs: #1
Nov 8 00:33:58.926038 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 8 00:33:58.926048 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:33:58.926059 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:33:58.926068 kernel: smpboot: Max logical packages: 1
Nov 8 00:33:58.926077 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 8 00:33:58.926085 kernel: devtmpfs: initialized
Nov 8 00:33:58.926094 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:33:58.926103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 8 00:33:58.926112 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:33:58.926121 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:33:58.926130 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:33:58.926141 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:33:58.926150 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:33:58.926159 kernel: audit: type=2000 audit(1762562039.351:1): state=initialized audit_enabled=0 res=1
Nov 8 00:33:58.926167 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:33:58.926176 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:33:58.926185 kernel: cpuidle: using governor menu
Nov 8 00:33:58.926194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:33:58.926203 kernel: dca service started, version 1.12.1
Nov 8 00:33:58.926211 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:33:58.926223 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:33:58.926232 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:33:58.926241 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:33:58.926249 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:33:58.926259 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:33:58.926267 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:33:58.926276 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:33:58.926285 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:33:58.926294 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 8 00:33:58.926305 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:33:58.926314 kernel: ACPI: Interpreter enabled
Nov 8 00:33:58.926322 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:33:58.926331 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:33:58.926340 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:33:58.926349 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:33:58.926357 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 8 00:33:58.926378 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:33:58.926552 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:33:58.926658 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 8 00:33:58.926751 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 8 00:33:58.926762 kernel: acpiphp: Slot [3] registered
Nov 8 00:33:58.926772 kernel: acpiphp: Slot [4] registered
Nov 8 00:33:58.926780 kernel: acpiphp: Slot [5] registered
Nov 8 00:33:58.926789 kernel: acpiphp: Slot [6] registered
Nov 8 00:33:58.926798 kernel: acpiphp: Slot [7] registered
Nov 8 00:33:58.926810 kernel: acpiphp: Slot [8] registered
Nov 8 00:33:58.926818 kernel: acpiphp: Slot [9] registered
Nov 8 00:33:58.926827 kernel: acpiphp: Slot [10] registered
Nov 8 00:33:58.926836 kernel: acpiphp: Slot [11] registered
Nov 8 00:33:58.926845 kernel: acpiphp: Slot [12] registered
Nov 8 00:33:58.926854 kernel: acpiphp: Slot [13] registered
Nov 8 00:33:58.926863 kernel: acpiphp: Slot [14] registered
Nov 8 00:33:58.926872 kernel: acpiphp: Slot [15] registered
Nov 8 00:33:58.926880 kernel: acpiphp: Slot [16] registered
Nov 8 00:33:58.926889 kernel: acpiphp: Slot [17] registered
Nov 8 00:33:58.926901 kernel: acpiphp: Slot [18] registered
Nov 8 00:33:58.926910 kernel: acpiphp: Slot [19] registered
Nov 8 00:33:58.926918 kernel: acpiphp: Slot [20] registered
Nov 8 00:33:58.926927 kernel: acpiphp: Slot [21] registered
Nov 8 00:33:58.926936 kernel: acpiphp: Slot [22] registered
Nov 8 00:33:58.926944 kernel: acpiphp: Slot [23] registered
Nov 8 00:33:58.926953 kernel: acpiphp: Slot [24] registered
Nov 8 00:33:58.926962 kernel: acpiphp: Slot [25] registered
Nov 8 00:33:58.926970 kernel: acpiphp: Slot [26] registered
Nov 8 00:33:58.926982 kernel: acpiphp: Slot [27] registered
Nov 8 00:33:58.926990 kernel: acpiphp: Slot [28] registered
Nov 8 00:33:58.926999 kernel: acpiphp: Slot [29] registered
Nov 8 00:33:58.927008 kernel: acpiphp: Slot [30] registered
Nov 8 00:33:58.927016 kernel: acpiphp: Slot [31] registered
Nov 8 00:33:58.927025 kernel: PCI host bridge to bus 0000:00
Nov 8 00:33:58.927121 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:33:58.927208 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:33:58.927296 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:33:58.927392 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 8 00:33:58.927478 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 8 00:33:58.927562 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:33:58.927668 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 8 00:33:58.927769 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 8 00:33:58.927870 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 8 00:33:58.927970 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 8 00:33:58.928062 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 8 00:33:58.928154 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 8 00:33:58.928247 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 8 00:33:58.928340 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 8 00:33:58.928454 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 8 00:33:58.928547 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 8 00:33:58.928675 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 8 00:33:58.928771 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Nov 8 00:33:58.928863 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 8 00:33:58.928955 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Nov 8 00:33:58.929048 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:33:58.929146 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 8 00:33:58.929243 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Nov 8 00:33:58.929341 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 8 00:33:58.931578 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Nov 8 00:33:58.931605 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:33:58.931615 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:33:58.931625 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:33:58.931634 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:33:58.931643 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 8 00:33:58.931657 kernel: iommu: Default domain type: Translated
Nov 8 00:33:58.931666 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:33:58.931675 kernel: efivars: Registered efivars operations
Nov 8 00:33:58.931684 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:33:58.931693 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:33:58.931703 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 8 00:33:58.931712 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 8 00:33:58.931815 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 8 00:33:58.931910 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 8 00:33:58.932008 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:33:58.932020 kernel: vgaarb: loaded
Nov 8 00:33:58.932030 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 8 00:33:58.932039 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 8 00:33:58.932048 kernel: clocksource: Switched to clocksource kvm-clock
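Vendor ID 1d0f in the probe lines above is Amazon: [1d0f:1111] is the EC2 display device, [1d0f:8061] the NVMe EBS controller, and [1d0f:ec20] the Elastic Network Adapter. A hedged sketch that re-lists the same IDs from sysfs after boot, much as lspci would; the /sys/bus/pci layout is standard:

    # List PCI functions with vendor:device IDs, mirroring the probe lines above.
    import pathlib

    for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()  # e.g. "0x1d0f" (Amazon)
        device = (dev / "device").read_text().strip()  # e.g. "0xec20" (ENA)
        print(f"{dev.name} [{vendor[2:]}:{device[2:]}]")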
Nov 8 00:33:58.932057 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:33:58.932066 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:33:58.932075 kernel: pnp: PnP ACPI init
Nov 8 00:33:58.932087 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:33:58.932096 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:33:58.932105 kernel: NET: Registered PF_INET protocol family
Nov 8 00:33:58.932114 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:33:58.932123 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:33:58.932132 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:33:58.932141 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:33:58.932157 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:33:58.932167 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:33:58.932179 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:33:58.932188 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:33:58.932196 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:33:58.932205 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:33:58.932301 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:33:58.933486 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:33:58.933618 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:33:58.933708 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 8 00:33:58.933791 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 8 00:33:58.933900 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 8 00:33:58.933913 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:33:58.933923 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:33:58.933932 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 00:33:58.933942 kernel: clocksource: Switched to clocksource tsc
Nov 8 00:33:58.933951 kernel: Initialise system trusted keyrings
Nov 8 00:33:58.933960 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:33:58.933969 kernel: Key type asymmetric registered
Nov 8 00:33:58.933981 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:33:58.933990 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:33:58.933999 kernel: io scheduler mq-deadline registered
Nov 8 00:33:58.934008 kernel: io scheduler kyber registered
Nov 8 00:33:58.934017 kernel: io scheduler bfq registered
Nov 8 00:33:58.934026 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:33:58.934035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:33:58.934044 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:33:58.934053 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:33:58.934065 kernel: i8042: Warning: Keylock active
Nov 8 00:33:58.934074 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:33:58.934083 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:33:58.934204 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 8 00:33:58.934294 kernel: rtc_cmos 00:00: registered as rtc0
Nov 8 00:33:58.935468 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:33:58 UTC (1762562038)
Nov 8 00:33:58.935577 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 8 00:33:58.935589 kernel: intel_pstate: CPU model not supported
Nov 8 00:33:58.935603 kernel: efifb: probing for efifb
Nov 8 00:33:58.935613 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Nov 8 00:33:58.935622 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 8 00:33:58.935631 kernel: efifb: scrolling: redraw
Nov 8 00:33:58.935640 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:33:58.935649 kernel: Console: switching to colour frame buffer device 100x37
Nov 8 00:33:58.935658 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:33:58.935668 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:33:58.935677 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:33:58.935689 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:33:58.935698 kernel: Segment Routing with IPv6
Nov 8 00:33:58.935706 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:33:58.935716 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:33:58.935724 kernel: Key type dns_resolver registered
Nov 8 00:33:58.935734 kernel: IPI shorthand broadcast: enabled
Nov 8 00:33:58.935762 kernel: sched_clock: Marking stable (479002170, 127688851)->(670081298, -63390277)
Nov 8 00:33:58.935774 kernel: registered taskstats version 1
Nov 8 00:33:58.935784 kernel: Loading compiled-in X.509 certificates
Nov 8 00:33:58.935797 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:33:58.935806 kernel: Key type .fscrypt registered
Nov 8 00:33:58.935816 kernel: Key type fscrypt-provisioning registered
Nov 8 00:33:58.935825 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:33:58.935834 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:33:58.935844 kernel: ima: No architecture policies found
Nov 8 00:33:58.935853 kernel: clk: Disabling unused clocks
Nov 8 00:33:58.935862 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:33:58.935872 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:33:58.935883 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:33:58.935893 kernel: Run /init as init process
Nov 8 00:33:58.935902 kernel: with arguments:
Nov 8 00:33:58.935914 kernel: /init
Nov 8 00:33:58.935923 kernel: with environment:
Nov 8 00:33:58.935932 kernel: HOME=/
Nov 8 00:33:58.935941 kernel: TERM=linux
Nov 8 00:33:58.935953 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:33:58.935967 systemd[1]: Detected virtualization amazon.
Nov 8 00:33:58.935977 systemd[1]: Detected architecture x86-64.
Nov 8 00:33:58.935986 systemd[1]: Running in initrd.
Nov 8 00:33:58.935996 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:33:58.936005 systemd[1]: Hostname set to .
Nov 8 00:33:58.936015 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:33:58.936025 systemd[1]: Queued start job for default target initrd.target.
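The rtc_cmos line above prints the same instant twice, as a calendar date and as Unix time 1762562038; the conversion is plain standard-library arithmetic:

    # Confirm the epoch value logged by rtc_cmos above.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1762562038, tz=timezone.utc).isoformat())
    # -> 2025-11-08T00:33:58+00:00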
Nov 8 00:33:58.936035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:33:58.936047 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:33:58.936058 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:33:58.936068 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:33:58.936078 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:33:58.936090 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:33:58.936104 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:33:58.936114 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:33:58.936124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:33:58.936133 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:33:58.936143 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:33:58.936153 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:33:58.936163 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:33:58.936176 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:33:58.936185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:33:58.936195 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:33:58.936205 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:33:58.936215 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:33:58.936225 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:33:58.936235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:33:58.936245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:33:58.936255 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:33:58.936267 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:33:58.936277 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:33:58.936287 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:33:58.936296 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:33:58.936306 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:33:58.936316 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:33:58.936326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:33:58.936336 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:33:58.937316 systemd-journald[178]: Collecting audit messages is disabled.
Nov 8 00:33:58.937350 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:33:58.937361 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:33:58.937454 systemd-journald[178]: Journal started
Nov 8 00:33:58.937476 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2b02cbc6c7d3f8a817b3a279db0a81) is 4.7M, max 38.2M, 33.4M free.
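The unit names above, such as dev-disk-by\x2dlabel-ROOT.device, come from systemd's device-path escaping: '/' separators become '-', and other special bytes (including a literal '-') become \xNN hex escapes, as systemd-escape --path would produce. A rough sketch of that rule, sufficient for the device paths seen here; the complete algorithm (leading dots, empty segments) lives in systemd itself:

    # Approximate systemd path escaping for the .device units logged above.
    # Simplified sketch: edge cases of the real algorithm are not handled.
    def systemd_escape_path(path: str) -> str:
        def esc(part: str) -> str:
            return "".join(
                ch if ch.isalnum() or ch in ":_." else "\\x%02x" % ord(ch)
                for ch in part)
        return "-".join(esc(p) for p in path.strip("/").split("/"))

    print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
    # -> dev-disk-by\x2dlabel-ROOT.device, matching the unit above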
Nov 8 00:33:58.930334 systemd-modules-load[179]: Inserted module 'overlay'
Nov 8 00:33:58.944385 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:33:58.947532 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:33:58.953345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:33:58.954175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:33:58.958553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:33:58.961194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:33:58.968791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:33:58.977390 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:33:58.979002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:33:58.982437 kernel: Bridge firewalling registered
Nov 8 00:33:58.982191 systemd-modules-load[179]: Inserted module 'br_netfilter'
Nov 8 00:33:58.983723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:33:58.985149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:33:58.991927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:33:58.996590 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:33:59.002543 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:33:59.005239 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:33:59.005681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:33:59.015021 dracut-cmdline[211]: dracut-dracut-053
Nov 8 00:33:59.018831 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:33:59.016580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:33:59.044681 systemd-resolved[216]: Positive Trust Anchors:
Nov 8 00:33:59.044697 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:33:59.044734 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:33:59.049588 systemd-resolved[216]: Defaulting to hostname 'linux'.
Nov 8 00:33:59.052069 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:33:59.052521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:33:59.092405 kernel: SCSI subsystem initialized
Nov 8 00:33:59.102487 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:33:59.114412 kernel: iscsi: registered transport (tcp)
Nov 8 00:33:59.136459 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:33:59.136548 kernel: QLogic iSCSI HBA Driver
Nov 8 00:33:59.176739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:33:59.184584 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:33:59.209602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:33:59.209683 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:33:59.210657 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:33:59.253401 kernel: raid6: avx512x4 gen() 18155 MB/s
Nov 8 00:33:59.271394 kernel: raid6: avx512x2 gen() 18111 MB/s
Nov 8 00:33:59.289396 kernel: raid6: avx512x1 gen() 18042 MB/s
Nov 8 00:33:59.307390 kernel: raid6: avx2x4 gen() 17982 MB/s
Nov 8 00:33:59.325393 kernel: raid6: avx2x2 gen() 18036 MB/s
Nov 8 00:33:59.343575 kernel: raid6: avx2x1 gen() 13728 MB/s
Nov 8 00:33:59.343628 kernel: raid6: using algorithm avx512x4 gen() 18155 MB/s
Nov 8 00:33:59.362581 kernel: raid6: .... xor() 7446 MB/s, rmw enabled
Nov 8 00:33:59.362644 kernel: raid6: using avx512x2 recovery algorithm
Nov 8 00:33:59.384409 kernel: xor: automatically using best checksumming function avx
Nov 8 00:33:59.550399 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:33:59.561153 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:33:59.570633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:33:59.583620 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 8 00:33:59.588779 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:33:59.598592 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:33:59.617343 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Nov 8 00:33:59.648687 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:33:59.654569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:33:59.709212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:33:59.720401 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
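The record systemd-resolved prints near the top of this section is the DNSSEC trust anchor for the root zone ('.'): a DS record with key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the digest of the root key-signing key, in the RFC 4034 field layout. A tiny sketch splitting it into those fields:

    # Split the root trust anchor logged above into DS record fields (RFC 4034).
    ds = "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    key_tag, algorithm, digest_type, digest = ds.split()
    print(key_tag, algorithm, digest_type, f"{len(digest) // 2}-byte digest")
    # -> 20326 8 2 32-byte digest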
Nov 8 00:33:59.748437 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:33:59.751151 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:33:59.753792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:33:59.755422 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:33:59.761651 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:33:59.801131 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:33:59.808803 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:33:59.843392 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:33:59.849140 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 8 00:33:59.849472 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 8 00:33:59.849648 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:33:59.851165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:33:59.852626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:33:59.855264 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:33:59.859489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:33:59.862905 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 8 00:33:59.859988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:33:59.867093 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 8 00:33:59.867722 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 8 00:33:59.871152 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:fe:cf:87:5b:bf
Nov 8 00:33:59.872250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:33:59.881856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:33:59.887968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:33:59.888928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:33:59.893549 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 8 00:33:59.898799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:33:59.898865 kernel: GPT:9289727 != 33554431
Nov 8 00:33:59.898887 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:33:59.901816 kernel: GPT:9289727 != 33554431
Nov 8 00:33:59.901874 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:33:59.901894 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:33:59.905429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:33:59.911580 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Nov 8 00:33:59.932217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:33:59.936622 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:33:59.966566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
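The GPT complaints above are the usual signature of a small disk image written to a larger volume: the primary header still places the backup header at LBA 9289727 (the ~4.4 GiB image the AMI was built from), while the EBS volume actually ends at LBA 33554431 (16 GiB at 512-byte sectors). The arithmetic, taken straight from the log; Flatcar's disk-uuid.service rewrites the headers a moment later (see below), and on a generic system sgdisk -e or parted would relocate the backup copy:

    # The two LBAs from the "GPT:9289727 != 33554431" warning above.
    SECTOR = 512
    image_last_lba, disk_last_lba = 9289727, 33554431
    print((image_last_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB: size the image was built for
    print((disk_last_lba + 1) * SECTOR / 2**30)   # 16.0 GiB: actual EBS volume size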
Nov 8 00:33:59.984577 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Nov 8 00:33:59.997398 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (446)
Nov 8 00:34:00.135573 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 8 00:34:00.148622 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 8 00:34:00.163849 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 8 00:34:00.185345 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 8 00:34:00.186076 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 8 00:34:00.194627 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:34:00.226041 disk-uuid[634]: Primary Header is updated.
Nov 8 00:34:00.226041 disk-uuid[634]: Secondary Entries is updated.
Nov 8 00:34:00.226041 disk-uuid[634]: Secondary Header is updated.
Nov 8 00:34:00.236668 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:34:00.254963 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:34:01.256027 disk-uuid[635]: The operation has completed successfully.
Nov 8 00:34:01.257090 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 8 00:34:01.403957 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:34:01.404102 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:34:01.421786 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:34:01.427551 sh[893]: Success
Nov 8 00:34:01.443412 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:34:01.557334 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:34:01.566702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:34:01.568996 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:34:01.614520 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:34:01.614595 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:34:01.617711 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:34:01.617778 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:34:01.620184 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:34:01.698391 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:34:01.701407 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:34:01.702793 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:34:01.714672 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:34:01.716598 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:34:01.742214 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:34:01.742275 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:34:01.742290 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:34:01.750035 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:34:01.760831 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:34:01.761419 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:34:01.767096 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:34:01.775645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:34:01.830834 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:34:01.845342 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:34:01.875957 systemd-networkd[1085]: lo: Link UP
Nov 8 00:34:01.875970 systemd-networkd[1085]: lo: Gained carrier
Nov 8 00:34:01.877850 systemd-networkd[1085]: Enumeration completed
Nov 8 00:34:01.878326 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:34:01.878331 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:34:01.879652 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:34:01.881632 systemd[1]: Reached target network.target - Network.
Nov 8 00:34:01.882488 systemd-networkd[1085]: eth0: Link UP
Nov 8 00:34:01.882494 systemd-networkd[1085]: eth0: Gained carrier
Nov 8 00:34:01.882508 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:34:01.893488 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.19.248/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 8 00:34:02.027617 ignition[998]: Ignition 2.19.0
Nov 8 00:34:02.027628 ignition[998]: Stage: fetch-offline
Nov 8 00:34:02.027835 ignition[998]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:34:02.029180 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:34:02.027844 ignition[998]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:34:02.028079 ignition[998]: Ignition finished successfully
Nov 8 00:34:02.034568 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
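The DHCPv4 line above carries the whole network picture for this instance; a quick check with the standard ipaddress module confirms the gateway sits inside the same /20 as the leased address:

    # Sanity-check the DHCPv4 lease logged above.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.248/20")
    print(iface.network)  # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True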
Nov 8 00:34:02.051016 ignition[1093]: Ignition 2.19.0
Nov 8 00:34:02.051041 ignition[1093]: Stage: fetch
Nov 8 00:34:02.051516 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:34:02.051531 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:34:02.051657 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:34:02.059884 ignition[1093]: PUT result: OK
Nov 8 00:34:02.062184 ignition[1093]: parsed url from cmdline: ""
Nov 8 00:34:02.062195 ignition[1093]: no config URL provided
Nov 8 00:34:02.062205 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:34:02.062239 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:34:02.062262 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:34:02.063226 ignition[1093]: PUT result: OK
Nov 8 00:34:02.063291 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 8 00:34:02.064156 ignition[1093]: GET result: OK
Nov 8 00:34:02.064252 ignition[1093]: parsing config with SHA512: ef576b598f6a51cfecfe779e8288cf78a82a7876de17efb3ac873c205aff3461e641eea07b0e6b06832bc85b676b518d61316d925d7145927cb4b8f8c73c574d
Nov 8 00:34:02.069278 unknown[1093]: fetched base config from "system"
Nov 8 00:34:02.069297 unknown[1093]: fetched base config from "system"
Nov 8 00:34:02.070074 ignition[1093]: fetch: fetch complete
Nov 8 00:34:02.069305 unknown[1093]: fetched user config from "aws"
Nov 8 00:34:02.070082 ignition[1093]: fetch: fetch passed
Nov 8 00:34:02.070286 ignition[1093]: Ignition finished successfully
Nov 8 00:34:02.072577 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:34:02.077780 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:34:02.098697 ignition[1099]: Ignition 2.19.0
Nov 8 00:34:02.098712 ignition[1099]: Stage: kargs
Nov 8 00:34:02.099192 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:34:02.099207 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:34:02.099337 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:34:02.100236 ignition[1099]: PUT result: OK
Nov 8 00:34:02.103277 ignition[1099]: kargs: kargs passed
Nov 8 00:34:02.103380 ignition[1099]: Ignition finished successfully
Nov 8 00:34:02.105240 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:34:02.110603 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:34:02.127349 ignition[1105]: Ignition 2.19.0
Nov 8 00:34:02.127364 ignition[1105]: Stage: disks
Nov 8 00:34:02.127857 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:34:02.127871 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 8 00:34:02.127996 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 8 00:34:02.128892 ignition[1105]: PUT result: OK
Nov 8 00:34:02.131677 ignition[1105]: disks: disks passed
Nov 8 00:34:02.131749 ignition[1105]: Ignition finished successfully
Nov 8 00:34:02.133474 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:34:02.134321 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:34:02.134710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:34:02.135235 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:34:02.135810 systemd[1]: Reached target sysinit.target - System Initialization.
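The fetch stage above is the IMDSv2 exchange: a PUT to mint a session token, then a token-authenticated GET of the user data, using exactly the two endpoints in the log. A self-contained sketch of that flow (the 21600-second TTL is an arbitrary choice; the header names are AWS's documented IMDSv2 ones), ending with the SHA512 digest Ignition logs for the parsed config:

    # Reproduce the IMDSv2 flow logged above: PUT a token, GET user-data.
    import hashlib
    import urllib.request

    BASE = "http://169.254.169.254"

    tok_req = urllib.request.Request(
        BASE + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(tok_req).read().decode()

    ud_req = urllib.request.Request(
        BASE + "/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    user_data = urllib.request.urlopen(ud_req).read()

    # Ignition logs "parsing config with SHA512: ..."; same digest here:
    print(hashlib.sha512(user_data).hexdigest())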
Nov 8 00:34:02.136409 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:34:02.142585 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:34:02.167010 systemd-fsck[1113]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:34:02.170929 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:34:02.176554 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:34:02.296389 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:34:02.297341 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:34:02.298506 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:34:02.307508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:34:02.310035 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:34:02.310934 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:34:02.310983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:34:02.311007 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:34:02.318465 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:34:02.325408 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1132)
Nov 8 00:34:02.325449 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:34:02.332497 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:34:02.332540 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:34:02.332561 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 8 00:34:02.336393 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 8 00:34:02.338569 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:34:02.519848 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:34:02.535604 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:34:02.540820 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:34:02.545961 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:34:02.747848 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:34:02.752492 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:34:02.756574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:34:02.766593 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:34:02.768565 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:34:02.806114 ignition[1246]: INFO : Ignition 2.19.0 Nov 8 00:34:02.806114 ignition[1246]: INFO : Stage: mount Nov 8 00:34:02.808027 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:34:02.808027 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:34:02.808027 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:34:02.810950 ignition[1246]: INFO : PUT result: OK Nov 8 00:34:02.812306 ignition[1246]: INFO : mount: mount passed Nov 8 00:34:02.812843 ignition[1246]: INFO : Ignition finished successfully Nov 8 00:34:02.812449 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:34:02.814918 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:34:02.820591 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:34:02.833676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:34:02.850696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1259) Nov 8 00:34:02.850773 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:34:02.852475 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:34:02.854988 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:34:02.859401 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:34:02.861917 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:34:02.884114 ignition[1276]: INFO : Ignition 2.19.0 Nov 8 00:34:02.884114 ignition[1276]: INFO : Stage: files Nov 8 00:34:02.885605 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:34:02.885605 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:34:02.885605 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:34:02.886944 ignition[1276]: INFO : PUT result: OK Nov 8 00:34:02.888816 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:34:02.890271 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:34:02.890271 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:34:02.911477 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:34:02.912228 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:34:02.912228 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:34:02.911981 unknown[1276]: wrote ssh authorized keys file for user: core Nov 8 00:34:02.924997 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:34:02.925965 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 8 00:34:02.925965 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:34:02.925965 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:34:03.737701 
systemd-networkd[1085]: eth0: Gained IPv6LL Nov 8 00:34:03.998969 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:34:04.204208 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:34:04.205275 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:34:04.215508 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:34:04.645952 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:34:04.978486 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:34:04.978486 ignition[1276]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 8 00:34:04.980295 ignition[1276]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(c): [finished] processing unit 
"containerd.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:34:04.981158 ignition[1276]: INFO : files: files passed Nov 8 00:34:04.981158 ignition[1276]: INFO : Ignition finished successfully Nov 8 00:34:04.983317 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:34:04.992127 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:34:04.994542 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:34:04.996310 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:34:04.996424 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:34:05.011323 initrd-setup-root-after-ignition[1304]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:34:05.012384 initrd-setup-root-after-ignition[1308]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:34:05.013289 initrd-setup-root-after-ignition[1304]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:34:05.013620 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:34:05.014934 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:34:05.020657 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:34:05.045272 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:34:05.045388 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:34:05.046803 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:34:05.047407 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:34:05.048141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:34:05.049683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:34:05.066686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:34:05.073703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:34:05.082907 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:34:05.083537 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 8 00:34:05.084334 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:34:05.085091 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:34:05.085227 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:34:05.086355 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:34:05.087157 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:34:05.087843 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:34:05.088535 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:34:05.089204 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:34:05.090029 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:34:05.090702 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:34:05.091390 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:34:05.092364 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:34:05.093087 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:34:05.093883 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:34:05.094001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:34:05.094943 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:34:05.095680 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:34:05.096384 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:34:05.096517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:34:05.097111 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:34:05.097223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:34:05.098264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:34:05.098392 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:34:05.098945 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:34:05.099038 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:34:05.109880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:34:05.110552 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:34:05.110831 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:34:05.113915 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:34:05.114436 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:34:05.114662 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:34:05.117827 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:34:05.118048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:34:05.131284 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:34:05.131437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 8 00:34:05.139217 ignition[1328]: INFO : Ignition 2.19.0 Nov 8 00:34:05.139217 ignition[1328]: INFO : Stage: umount Nov 8 00:34:05.142005 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:34:05.142005 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:34:05.142005 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:34:05.142005 ignition[1328]: INFO : PUT result: OK Nov 8 00:34:05.147068 ignition[1328]: INFO : umount: umount passed Nov 8 00:34:05.147788 ignition[1328]: INFO : Ignition finished successfully Nov 8 00:34:05.150384 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:34:05.151231 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:34:05.153640 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:34:05.154246 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:34:05.154305 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:34:05.155551 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:34:05.155610 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:34:05.156096 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:34:05.156152 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:34:05.156672 systemd[1]: Stopped target network.target - Network. Nov 8 00:34:05.157118 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:34:05.157186 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:34:05.160565 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:34:05.160890 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:34:05.164455 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:34:05.164878 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:34:05.165918 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:34:05.166582 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:34:05.166640 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:34:05.167152 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:34:05.167204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:34:05.167750 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:34:05.167819 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:34:05.168386 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:34:05.168449 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:34:05.169170 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:34:05.169909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:34:05.170860 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:34:05.170979 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:34:05.172233 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:34:05.172348 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:34:05.174681 systemd-networkd[1085]: eth0: DHCPv6 lease lost Nov 8 00:34:05.176533 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:34:05.176696 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
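The Ignition stages traced through this excerpt (fetch, kargs, disks, files, and the umount pass above) were all driven by the user-data fetched at the start. A fragment shaped like the following Ignition spec-3.x JSON would produce file writes, a containerd drop-in, and a preset-enable of prepare-helm.service as logged; the paths and unit names are taken from the log, while the file contents and drop-in body are placeholders rather than the real user-data:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/home/core/install.sh",
            "mode": 493,
            "contents": { "source": "data:,echo%20placeholder" }
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "containerd.service",
            "dropins": [
              { "name": "10-use-cgroupfs.conf",
                "contents": "[Service]\n# placeholder body\n" }
            ]
          },
          { "name": "prepare-helm.service", "enabled": true }
        ]
      }
    }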
Nov 8 00:34:05.177769 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:34:05.177822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:34:05.184519 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:34:05.185075 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:34:05.185152 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:34:05.185901 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:34:05.186780 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:34:05.186916 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:34:05.200030 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:34:05.200151 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:34:05.203086 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:34:05.203160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:34:05.203923 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:34:05.203988 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:34:05.205129 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:34:05.205268 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:34:05.206602 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:34:05.206788 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:34:05.209292 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:34:05.209785 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:34:05.210288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:34:05.210338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:34:05.211401 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:34:05.211466 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:34:05.212566 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:34:05.212627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:34:05.213790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:34:05.213852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:34:05.220672 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:34:05.222180 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:34:05.222265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:34:05.223493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:34:05.223558 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:34:05.229391 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:34:05.229650 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:34:05.230626 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:34:05.234551 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:34:05.247482 systemd[1]: Switching root. 
Nov 8 00:34:05.278109 systemd-journald[178]: Journal stopped Nov 8 00:34:06.748001 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Nov 8 00:34:06.748066 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:34:06.748083 kernel: SELinux: policy capability open_perms=1 Nov 8 00:34:06.748095 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:34:06.748107 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:34:06.748119 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:34:06.748131 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:34:06.748143 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:34:06.748158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:34:06.748170 kernel: audit: type=1403 audit(1762562045.812:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:34:06.748184 systemd[1]: Successfully loaded SELinux policy in 39.806ms. Nov 8 00:34:06.748204 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.796ms. Nov 8 00:34:06.748218 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:34:06.748231 systemd[1]: Detected virtualization amazon. Nov 8 00:34:06.748243 systemd[1]: Detected architecture x86-64. Nov 8 00:34:06.748255 systemd[1]: Detected first boot. Nov 8 00:34:06.748268 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:34:06.748283 zram_generator::config[1388]: No configuration found. Nov 8 00:34:06.748300 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:34:06.748312 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:34:06.748325 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 8 00:34:06.748338 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:34:06.748350 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:34:06.748364 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:34:06.753438 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:34:06.753463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:34:06.753484 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:34:06.753497 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:34:06.753510 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:34:06.753523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:34:06.753536 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:34:06.753554 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:34:06.753566 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:34:06.753579 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Nov 8 00:34:06.753594 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:34:06.753608 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:34:06.753620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:34:06.753633 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:34:06.753646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:34:06.753659 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:34:06.753672 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:34:06.753685 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:34:06.753704 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:34:06.753717 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:34:06.753729 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:34:06.753743 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:34:06.753756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:34:06.753768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:34:06.753781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:34:06.753793 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:34:06.753806 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:34:06.753821 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:34:06.753834 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:34:06.753848 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:06.753860 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:34:06.753873 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:34:06.753885 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:34:06.753897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:34:06.753911 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:34:06.753923 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:34:06.753939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:34:06.753951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:34:06.753964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:34:06.753977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:34:06.753990 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:34:06.754002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:34:06.754015 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:34:06.754028 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Nov 8 00:34:06.754045 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 8 00:34:06.754057 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:34:06.754069 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:34:06.754081 kernel: fuse: init (API version 7.39) Nov 8 00:34:06.754094 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:34:06.754106 kernel: loop: module loaded Nov 8 00:34:06.754119 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:34:06.754131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:34:06.754145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:06.754162 kernel: ACPI: bus type drm_connector registered Nov 8 00:34:06.754174 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:34:06.754187 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:34:06.754200 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:34:06.754212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:34:06.754259 systemd-journald[1503]: Collecting audit messages is disabled. Nov 8 00:34:06.754285 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:34:06.754300 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:34:06.754314 systemd-journald[1503]: Journal started Nov 8 00:34:06.754339 systemd-journald[1503]: Runtime Journal (/run/log/journal/ec2b02cbc6c7d3f8a817b3a279db0a81) is 4.7M, max 38.2M, 33.4M free. Nov 8 00:34:06.758543 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:34:06.758917 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:34:06.759818 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:34:06.760751 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:34:06.760963 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:34:06.761691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:34:06.761838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:34:06.762800 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:34:06.763009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:34:06.763680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:34:06.763826 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:34:06.764652 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:34:06.764799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:34:06.765498 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:34:06.765653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:34:06.766583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:34:06.767196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
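The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop services started and finished above are all instances of a single systemd template unit; the text after "@" becomes the instance string that names the module to load. A sketch in the style of upstream systemd's modprobe@.service (the unit actually shipped may differ in detail):

    # modprobe@.service, approximate upstream shape.
    # %i / %I expand to the instance string, e.g. "dm_mod".
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    # The "-" prefix means a missing module is not a unit failure.
    ExecStart=-/usr/sbin/modprobe -abq %I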
Nov 8 00:34:06.768171 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:34:06.778574 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:34:06.783561 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:34:06.786546 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:34:06.792225 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:34:06.800549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:34:06.803609 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:34:06.806625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:34:06.813763 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:34:06.814505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:34:06.819683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:34:06.829087 systemd-journald[1503]: Time spent on flushing to /var/log/journal/ec2b02cbc6c7d3f8a817b3a279db0a81 is 54.258ms for 966 entries. Nov 8 00:34:06.829087 systemd-journald[1503]: System Journal (/var/log/journal/ec2b02cbc6c7d3f8a817b3a279db0a81) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:34:06.899785 systemd-journald[1503]: Received client request to flush runtime journal. Nov 8 00:34:06.828607 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:34:06.842670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:34:06.850490 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:34:06.851265 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:34:06.855657 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:34:06.875792 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:34:06.886581 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:34:06.896754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:34:06.907674 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:34:06.912660 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Nov 8 00:34:06.912681 systemd-tmpfiles[1540]: ACLs are not supported, ignoring. Nov 8 00:34:06.917023 udevadm[1549]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:34:06.919411 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:34:06.926630 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:34:06.964671 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:34:06.972695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:34:06.990310 systemd-tmpfiles[1562]: ACLs are not supported, ignoring. 
Nov 8 00:34:06.990697 systemd-tmpfiles[1562]: ACLs are not supported, ignoring. Nov 8 00:34:06.997871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:34:07.443881 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:34:07.452573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:34:07.477648 systemd-udevd[1568]: Using default interface naming scheme 'v255'. Nov 8 00:34:07.521840 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:34:07.531525 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:34:07.549573 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:34:07.581920 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 8 00:34:07.626527 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:34:07.629361 (udev-worker)[1574]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:34:07.685391 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 8 00:34:07.689399 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 8 00:34:07.702080 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:34:07.710445 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Nov 8 00:34:07.715336 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 8 00:34:07.723394 kernel: ACPI: button: Sleep Button [SLPF] Nov 8 00:34:07.735148 systemd-networkd[1573]: lo: Link UP Nov 8 00:34:07.735923 systemd-networkd[1573]: lo: Gained carrier Nov 8 00:34:07.739775 systemd-networkd[1573]: Enumeration completed Nov 8 00:34:07.741165 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:34:07.741297 systemd-networkd[1573]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:34:07.741306 systemd-networkd[1573]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:34:07.746476 systemd-networkd[1573]: eth0: Link UP Nov 8 00:34:07.746832 systemd-networkd[1573]: eth0: Gained carrier Nov 8 00:34:07.747452 systemd-networkd[1573]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:34:07.748783 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:34:07.756943 systemd-networkd[1573]: eth0: DHCPv4 address 172.31.19.248/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:34:07.799819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:34:07.805557 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:34:07.809292 systemd-networkd[1573]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:34:07.813669 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:34:07.814020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:34:07.823792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
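eth0 comes up above because it matched the catch-all /usr/lib/systemd/network/zz-default.network; the "potentially unpredictable interface name" note is systemd-networkd warning that kernel-assigned names like eth0 (net.ifnames=0 is on the kernel command line) are not stable identifiers. The shipped file carries more options, but its effective shape is roughly:

    # Illustrative sketch of a catch-all .network unit like
    # zz-default.network; Flatcar's actual file may differ.
    [Match]
    Name=*

    [Network]
    DHCP=yes

With a unit of that shape in place, networkd runs DHCP on any matched link, which is where the 172.31.19.248/20 lease from 172.31.16.1 comes from.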
Nov 8 00:34:07.846191 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1582) Nov 8 00:34:07.954797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:34:07.988908 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:34:08.003143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:34:08.008550 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:34:08.026805 lvm[1695]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:34:08.055328 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:34:08.056451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:34:08.060559 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:34:08.071591 lvm[1698]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:34:08.098395 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:34:08.099429 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:34:08.099840 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:34:08.099865 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:34:08.100230 systemd[1]: Reached target machines.target - Containers. Nov 8 00:34:08.101806 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:34:08.106606 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:34:08.110231 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:34:08.112063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:34:08.117569 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:34:08.122359 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:34:08.136539 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:34:08.139706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:34:08.153514 kernel: loop0: detected capacity change from 0 to 224512 Nov 8 00:34:08.151701 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:34:08.171400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:34:08.172882 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 8 00:34:08.236409 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:34:08.260418 kernel: loop1: detected capacity change from 0 to 142488 Nov 8 00:34:08.343392 kernel: loop2: detected capacity change from 0 to 61336 Nov 8 00:34:08.437701 kernel: loop3: detected capacity change from 0 to 140768 Nov 8 00:34:08.520591 kernel: loop4: detected capacity change from 0 to 224512 Nov 8 00:34:08.549398 kernel: loop5: detected capacity change from 0 to 142488 Nov 8 00:34:08.570590 kernel: loop6: detected capacity change from 0 to 61336 Nov 8 00:34:08.586391 kernel: loop7: detected capacity change from 0 to 140768 Nov 8 00:34:08.611278 (sd-merge)[1719]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 8 00:34:08.611805 (sd-merge)[1719]: Merged extensions into '/usr'. Nov 8 00:34:08.616867 systemd[1]: Reloading requested from client PID 1706 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:34:08.616992 systemd[1]: Reloading... Nov 8 00:34:08.671410 zram_generator::config[1747]: No configuration found. Nov 8 00:34:08.834524 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:34:08.918906 systemd[1]: Reloading finished in 301 ms. Nov 8 00:34:08.945938 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:34:08.959848 systemd[1]: Starting ensure-sysext.service... Nov 8 00:34:08.963593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:34:08.977542 systemd[1]: Reloading requested from client PID 1804 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:34:08.977703 systemd[1]: Reloading... Nov 8 00:34:09.018260 systemd-tmpfiles[1805]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:34:09.018857 systemd-tmpfiles[1805]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:34:09.020727 systemd-tmpfiles[1805]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:34:09.021384 systemd-tmpfiles[1805]: ACLs are not supported, ignoring. Nov 8 00:34:09.021730 systemd-tmpfiles[1805]: ACLs are not supported, ignoring. Nov 8 00:34:09.026238 systemd-tmpfiles[1805]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:34:09.026433 systemd-tmpfiles[1805]: Skipping /boot Nov 8 00:34:09.065353 zram_generator::config[1831]: No configuration found. Nov 8 00:34:09.065501 ldconfig[1702]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:34:09.061424 systemd-tmpfiles[1805]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:34:09.061441 systemd-tmpfiles[1805]: Skipping /boot Nov 8 00:34:09.236460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:34:09.311677 systemd[1]: Reloading finished in 333 ms. Nov 8 00:34:09.331791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:34:09.340340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
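The (sd-merge) lines above show systemd-sysext overlaying four extension images onto /usr before ordinary services start; the kubernetes image is the one the Ignition files stage linked into /etc/extensions earlier, and systemd-sysext scans further hierarchies (such as /run/extensions and /var/lib/extensions) for the rest. A hypothetical Python helper, not part of Flatcar, that lists images from one such directory the way the log names them:

    import os

    # Hypothetical helper: enumerate .raw sysext images in one
    # search directory, stripping the ".raw" suffix to get the
    # extension names the (sd-merge) lines report.
    def list_extensions(root="/etc/extensions"):
        if not os.path.isdir(root):
            return []
        return sorted(
            name[:-len(".raw")]
            for name in os.listdir(root)
            if name.endswith(".raw")
        )

    print("Using extensions:", ", ".join(list_extensions()))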
Nov 8 00:34:09.352353 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:09.357840 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:34:09.362685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:34:09.363541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:34:09.370814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:34:09.382759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:34:09.390900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:34:09.392431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:34:09.398604 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:34:09.414457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:34:09.423955 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:34:09.427601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:09.433807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:34:09.434060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:34:09.440687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:34:09.440937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:34:09.442180 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:34:09.443114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:34:09.459621 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:09.460076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:34:09.468426 augenrules[1927]: No rules Nov 8 00:34:09.469848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:34:09.476784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:34:09.483775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:34:09.488540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:34:09.488861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:09.496101 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:34:09.504358 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:34:09.504861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:34:09.514959 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:34:09.529740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 8 00:34:09.533795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:34:09.534205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:34:09.536173 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:34:09.538194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:34:09.554157 systemd[1]: Finished ensure-sysext.service. Nov 8 00:34:09.558357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:09.558693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:34:09.566664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:34:09.570179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:34:09.571617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:34:09.571688 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:34:09.571745 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:34:09.588569 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:34:09.589215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:34:09.591210 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:34:09.593829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:34:09.594067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:34:09.596085 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:34:09.596621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:34:09.602100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:34:09.602170 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:34:09.621355 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:34:09.634807 systemd-resolved[1916]: Positive Trust Anchors: Nov 8 00:34:09.634825 systemd-resolved[1916]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:34:09.634872 systemd-resolved[1916]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:34:09.651239 systemd-resolved[1916]: Defaulting to hostname 'linux'. Nov 8 00:34:09.653732 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Nov 8 00:34:09.654277 systemd[1]: Reached target network.target - Network. Nov 8 00:34:09.654684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:34:09.655029 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:34:09.655480 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:34:09.655840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:34:09.656335 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:34:09.656754 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:34:09.657067 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:34:09.657395 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:34:09.657429 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:34:09.657824 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:34:09.659127 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:34:09.660963 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:34:09.662625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:34:09.665438 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:34:09.665879 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:34:09.666185 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:34:09.666645 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:34:09.666678 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:34:09.666704 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:34:09.669466 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:34:09.672644 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:34:09.675088 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:34:09.677485 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:34:09.683510 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:34:09.685221 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:34:09.693991 systemd-networkd[1573]: eth0: Gained IPv6LL Nov 8 00:34:09.698320 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:34:09.703514 systemd[1]: Started ntpd.service - Network Time Service. Nov 8 00:34:09.706151 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:34:09.721817 jq[1968]: false Nov 8 00:34:09.729582 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 8 00:34:09.736540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:34:09.748525 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:34:09.762161 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 8 00:34:09.763616 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:34:09.777689 extend-filesystems[1970]: Found loop4 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found loop5 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found loop6 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found loop7 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p1 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p2 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p3 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found usr Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p4 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p6 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p7 Nov 8 00:34:09.778991 extend-filesystems[1970]: Found nvme0n1p9 Nov 8 00:34:09.778991 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9 Nov 8 00:34:09.777983 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:34:09.791703 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:34:09.798331 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:34:09.800113 coreos-metadata[1966]: Nov 08 00:34:09.799 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:34:09.802628 coreos-metadata[1966]: Nov 08 00:34:09.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 8 00:34:09.804909 coreos-metadata[1966]: Nov 08 00:34:09.804 INFO Fetch successful Nov 8 00:34:09.804909 coreos-metadata[1966]: Nov 08 00:34:09.804 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 8 00:34:09.810553 coreos-metadata[1966]: Nov 08 00:34:09.807 INFO Fetch successful Nov 8 00:34:09.810553 coreos-metadata[1966]: Nov 08 00:34:09.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 8 00:34:09.810393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:34:09.810634 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:34:09.810920 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:34:09.811116 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 8 00:34:09.811573 coreos-metadata[1966]: Nov 08 00:34:09.811 INFO Fetch successful Nov 8 00:34:09.811573 coreos-metadata[1966]: Nov 08 00:34:09.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 8 00:34:09.812020 coreos-metadata[1966]: Nov 08 00:34:09.811 INFO Fetch successful Nov 8 00:34:09.812020 coreos-metadata[1966]: Nov 08 00:34:09.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 8 00:34:09.812686 coreos-metadata[1966]: Nov 08 00:34:09.812 INFO Fetch failed with 404: resource not found Nov 8 00:34:09.812686 coreos-metadata[1966]: Nov 08 00:34:09.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 8 00:34:09.813027 coreos-metadata[1966]: Nov 08 00:34:09.812 INFO Fetch successful Nov 8 00:34:09.814450 coreos-metadata[1966]: Nov 08 00:34:09.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 8 00:34:09.815491 dbus-daemon[1967]: [system] SELinux support is enabled Nov 8 00:34:09.824051 coreos-metadata[1966]: Nov 08 00:34:09.816 INFO Fetch successful Nov 8 00:34:09.824051 coreos-metadata[1966]: Nov 08 00:34:09.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 8 00:34:09.824051 coreos-metadata[1966]: Nov 08 00:34:09.817 INFO Fetch successful Nov 8 00:34:09.824051 coreos-metadata[1966]: Nov 08 00:34:09.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 8 00:34:09.824051 coreos-metadata[1966]: Nov 08 00:34:09.817 INFO Fetch successful Nov 8 00:34:09.824051 coreos-metadata[1966]: Nov 08 00:34:09.817 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 8 00:34:09.824229 jq[1994]: true Nov 8 00:34:09.835197 update_engine[1992]: I20251108 00:34:09.829435 1992 main.cc:92] Flatcar Update Engine starting Nov 8 00:34:09.835197 update_engine[1992]: I20251108 00:34:09.834836 1992 update_check_scheduler.cc:74] Next update check in 5m25s Nov 8 00:34:09.839515 coreos-metadata[1966]: Nov 08 00:34:09.828 INFO Fetch successful Nov 8 00:34:09.824578 dbus-daemon[1967]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1573 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:34:09.826458 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:34:09.839102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:34:09.839326 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:34:09.847538 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9 Nov 8 00:34:09.856412 extend-filesystems[2012]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:34:09.859572 ntpd[1972]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: ---------------------------------------------------- Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: corporation. Support and training for ntp-4 are Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: available at https://www.nwtime.org/support Nov 8 00:34:09.860713 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: ---------------------------------------------------- Nov 8 00:34:09.859593 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:34:09.859601 ntpd[1972]: ---------------------------------------------------- Nov 8 00:34:09.859609 ntpd[1972]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:34:09.859616 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:34:09.859623 ntpd[1972]: corporation. Support and training for ntp-4 are Nov 8 00:34:09.859630 ntpd[1972]: available at https://www.nwtime.org/support Nov 8 00:34:09.859638 ntpd[1972]: ---------------------------------------------------- Nov 8 00:34:09.872921 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: proto: precision = 0.057 usec (-24) Nov 8 00:34:09.872921 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: basedate set to 2025-10-26 Nov 8 00:34:09.872921 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: gps base set to 2025-10-26 (week 2390) Nov 8 00:34:09.868784 ntpd[1972]: proto: precision = 0.057 usec (-24) Nov 8 00:34:09.869057 ntpd[1972]: basedate set to 2025-10-26 Nov 8 00:34:09.869068 ntpd[1972]: gps base set to 2025-10-26 (week 2390) Nov 8 00:34:09.877421 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 8 00:34:09.883010 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listen normally on 3 eth0 172.31.19.248:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listen normally on 4 lo [::1]:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listen normally on 5 eth0 [fe80::4fe:cfff:fe87:5bbf%2]:123 Nov 8 00:34:09.886123 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: Listening on routing socket on fd #22 for interface updates Nov 8 00:34:09.885467 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:34:09.885645 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:34:09.885674 ntpd[1972]: Listen normally on 3 eth0 172.31.19.248:123 Nov 8 00:34:09.885706 ntpd[1972]: Listen normally on 4 lo [::1]:123 Nov 8 00:34:09.885745 ntpd[1972]: Listen normally on 5 eth0 [fe80::4fe:cfff:fe87:5bbf%2]:123 Nov 8 00:34:09.885773 ntpd[1972]: Listening on routing socket on fd #22 for interface updates Nov 8 00:34:09.895831 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:34:09.902598 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:34:09.913432 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:34:09.913432 ntpd[1972]: 8 Nov 00:34:09 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:34:09.909482 ntpd[1972]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:34:09.918409 jq[2011]: true Nov 8 00:34:09.935117 dbus-daemon[1967]: [system] Successfully activated service 
'org.freedesktop.systemd1' Nov 8 00:34:09.944757 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:34:09.951287 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:34:09.952474 tar[2002]: linux-amd64/LICENSE Nov 8 00:34:09.952474 tar[2002]: linux-amd64/helm Nov 8 00:34:09.959223 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:34:09.972497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:34:09.983873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:34:09.985089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:34:09.985203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:34:09.985228 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:34:10.013544 systemd-logind[1990]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:34:10.013609 systemd-logind[1990]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 8 00:34:10.013629 systemd-logind[1990]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:34:10.013847 systemd-logind[1990]: New seat seat0. Nov 8 00:34:10.014683 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:34:10.019075 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:34:10.019107 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:34:10.022258 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:34:10.028536 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:34:10.034886 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:34:10.037854 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 8 00:34:10.054067 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 8 00:34:10.081390 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 8 00:34:10.120381 extend-filesystems[2012]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 8 00:34:10.120381 extend-filesystems[2012]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 8 00:34:10.120381 extend-filesystems[2012]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 8 00:34:10.122744 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9 Nov 8 00:34:10.122752 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:34:10.122980 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:34:10.142404 bash[2071]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:34:10.145211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:34:10.157395 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1569) Nov 8 00:34:10.158743 systemd[1]: Starting sshkeys.service... 
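The extend-filesystems pair above (check size, then grow /dev/nvme0n1p9 from 553472 to 3587067 4k blocks) is an online ext4 resize: ext4 can grow while mounted read-write. A sketch of the same check-then-grow logic, assuming root and the device path from the log; blockdev, dumpe2fs, and resize2fs are the standard util-linux/e2fsprogs tools:

    import subprocess

    DEV = "/dev/nvme0n1p9"

    def device_bytes(dev):
        out = subprocess.run(["blockdev", "--getsize64", dev],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    def fs_bytes(dev):
        # dumpe2fs -h prints "Block count:" and "Block size:" for ext filesystems
        out = subprocess.run(["dumpe2fs", "-h", dev],
                             capture_output=True, text=True, check=True)
        info = dict(line.split(":", 1) for line in out.stdout.splitlines() if ":" in line)
        return int(info["Block count"].strip()) * int(info["Block size"].strip())

    if fs_bytes(DEV) < device_bytes(DEV):
        # grows the mounted filesystem online, as in the log above
        subprocess.run(["resize2fs", DEV], check=True)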
Nov 8 00:34:10.188450 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:34:10.197773 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:34:10.202770 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:34:10.265669 amazon-ssm-agent[2066]: Initializing new seelog logger Nov 8 00:34:10.273122 amazon-ssm-agent[2066]: New Seelog Logger Creation Complete Nov 8 00:34:10.273122 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.273122 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.273122 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 processing appconfig overrides Nov 8 00:34:10.275342 locksmithd[2059]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:34:10.279666 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO Proxy environment variables: Nov 8 00:34:10.279666 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.279666 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.282258 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 processing appconfig overrides Nov 8 00:34:10.282258 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.282258 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.282258 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 processing appconfig overrides Nov 8 00:34:10.304397 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.304397 amazon-ssm-agent[2066]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:34:10.304397 amazon-ssm-agent[2066]: 2025/11/08 00:34:10 processing appconfig overrides Nov 8 00:34:10.390692 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO https_proxy: Nov 8 00:34:10.466765 coreos-metadata[2089]: Nov 08 00:34:10.466 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:34:10.472025 coreos-metadata[2089]: Nov 08 00:34:10.471 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 8 00:34:10.473693 coreos-metadata[2089]: Nov 08 00:34:10.473 INFO Fetch successful Nov 8 00:34:10.473693 coreos-metadata[2089]: Nov 08 00:34:10.473 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:34:10.474834 coreos-metadata[2089]: Nov 08 00:34:10.474 INFO Fetch successful Nov 8 00:34:10.478107 unknown[2089]: wrote ssh authorized keys file for user: core Nov 8 00:34:10.498313 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO http_proxy: Nov 8 00:34:10.521828 update-ssh-keys[2195]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:34:10.524345 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:34:10.530846 systemd[1]: Finished sshkeys.service. Nov 8 00:34:10.543249 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:34:10.543868 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
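The update-ssh-keys record above rewrites /home/core/.ssh/authorized_keys from the public keys fetched via instance metadata. A sketch of doing that rewrite atomically, so sshd can never read a half-written file; write_authorized_keys is a hypothetical helper, run as root, and the key list would come from the public-keys metadata paths shown in the log:

    import os
    import pwd
    import tempfile

    def write_authorized_keys(user, keys):
        pw = pwd.getpwnam(user)
        ssh_dir = os.path.join(pw.pw_dir, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=ssh_dir)
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(keys) + "\n")
        os.chmod(tmp, 0o600)
        os.chown(tmp, pw.pw_uid, pw.pw_gid)
        # rename within the same directory is atomic on POSIX filesystems
        os.rename(tmp, os.path.join(ssh_dir, "authorized_keys"))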
Nov 8 00:34:10.545158 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2055 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:34:10.554628 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:34:10.572270 polkitd[2201]: Started polkitd version 121 Nov 8 00:34:10.589197 polkitd[2201]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:34:10.589262 polkitd[2201]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:34:10.591914 polkitd[2201]: Finished loading, compiling and executing 2 rules Nov 8 00:34:10.592692 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:34:10.592533 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:34:10.594598 polkitd[2201]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:34:10.597747 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO no_proxy: Nov 8 00:34:10.618902 containerd[2015]: time="2025-11-08T00:34:10.618817207Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:34:10.630628 systemd-hostnamed[2055]: Hostname set to (transient) Nov 8 00:34:10.630742 systemd-resolved[1916]: System hostname changed to 'ip-172-31-19-248'. Nov 8 00:34:10.698386 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:34:10.716497 containerd[2015]: time="2025-11-08T00:34:10.716448130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721591469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721638525Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721657284Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721807281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721821702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721871993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.721883582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.722094878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.722109117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.722121018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:34:10.722737 containerd[2015]: time="2025-11-08T00:34:10.722130246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.723026 containerd[2015]: time="2025-11-08T00:34:10.722191215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.723026 containerd[2015]: time="2025-11-08T00:34:10.722415442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:34:10.723026 containerd[2015]: time="2025-11-08T00:34:10.722558144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:34:10.723026 containerd[2015]: time="2025-11-08T00:34:10.722572333Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:34:10.723026 containerd[2015]: time="2025-11-08T00:34:10.722663544Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:34:10.723026 containerd[2015]: time="2025-11-08T00:34:10.722701725Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:34:10.727120 containerd[2015]: time="2025-11-08T00:34:10.727084428Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:34:10.727269 containerd[2015]: time="2025-11-08T00:34:10.727255235Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:34:10.727413 containerd[2015]: time="2025-11-08T00:34:10.727399710Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.729395836Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.729418563Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.729584269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.729904232Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730009553Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730025217Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730037921Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730056517Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730069247Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730080995Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730094337Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730109871Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730122157Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730394 containerd[2015]: time="2025-11-08T00:34:10.730135037Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730147600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730167064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730180474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730192494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730205008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730217148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730234276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730246434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730258041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730279117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730292540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730304058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730314816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730326991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.730718 containerd[2015]: time="2025-11-08T00:34:10.730346648Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731031029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731058864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731069698Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731111425Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731128846Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731140032Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731152069Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731160915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731173581Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731186938Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:34:10.731383 containerd[2015]: time="2025-11-08T00:34:10.731197070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:34:10.731771 containerd[2015]: time="2025-11-08T00:34:10.731722647Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:34:10.734382 containerd[2015]: time="2025-11-08T00:34:10.732423323Z" level=info msg="Connect containerd service" Nov 8 00:34:10.734382 containerd[2015]: time="2025-11-08T00:34:10.732480036Z" level=info msg="using legacy CRI server" Nov 8 00:34:10.734382 containerd[2015]: time="2025-11-08T00:34:10.732487518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:34:10.734382 containerd[2015]: time="2025-11-08T00:34:10.732586638Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:34:10.736630 containerd[2015]: time="2025-11-08T00:34:10.736589330Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:34:10.738904 
containerd[2015]: time="2025-11-08T00:34:10.738483771Z" level=info msg="Start subscribing containerd event" Nov 8 00:34:10.738904 containerd[2015]: time="2025-11-08T00:34:10.738545891Z" level=info msg="Start recovering state" Nov 8 00:34:10.738904 containerd[2015]: time="2025-11-08T00:34:10.738619761Z" level=info msg="Start event monitor" Nov 8 00:34:10.738904 containerd[2015]: time="2025-11-08T00:34:10.738630204Z" level=info msg="Start snapshots syncer" Nov 8 00:34:10.738904 containerd[2015]: time="2025-11-08T00:34:10.738640163Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:34:10.738904 containerd[2015]: time="2025-11-08T00:34:10.738651046Z" level=info msg="Start streaming server" Nov 8 00:34:10.739265 containerd[2015]: time="2025-11-08T00:34:10.739242591Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:34:10.739362 containerd[2015]: time="2025-11-08T00:34:10.739349178Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:34:10.739484 containerd[2015]: time="2025-11-08T00:34:10.739472544Z" level=info msg="containerd successfully booted in 0.123985s" Nov 8 00:34:10.739599 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:34:10.796141 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:34:10.894573 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO Agent will take identity from EC2 Nov 8 00:34:10.994470 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:34:11.015016 sshd_keygen[2013]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:34:11.061656 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:34:11.077675 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:34:11.093911 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:34:11.094154 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:34:11.095133 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:34:11.108495 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:34:11.125745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:34:11.135697 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:34:11.139886 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:34:11.141172 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:34:11.194353 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:34:11.197321 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 8 00:34:11.197321 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] Starting Core Agent Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [Registrar] Starting registrar module Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:11 INFO [EC2Identity] EC2 registration was successful. Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:11 INFO [CredentialRefresher] credentialRefresher has started Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:11 INFO [CredentialRefresher] Starting credentials refresher loop Nov 8 00:34:11.197477 amazon-ssm-agent[2066]: 2025-11-08 00:34:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 8 00:34:11.201770 tar[2002]: linux-amd64/README.md Nov 8 00:34:11.223823 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:34:11.293873 amazon-ssm-agent[2066]: 2025-11-08 00:34:11 INFO [CredentialRefresher] Next credential rotation will be in 30.824993915616666 minutes Nov 8 00:34:12.211088 amazon-ssm-agent[2066]: 2025-11-08 00:34:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 8 00:34:12.312349 amazon-ssm-agent[2066]: 2025-11-08 00:34:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2242) started Nov 8 00:34:12.412270 amazon-ssm-agent[2066]: 2025-11-08 00:34:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 8 00:34:12.459539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:34:12.460893 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:34:12.462201 systemd[1]: Startup finished in 7.726s (kernel) + 6.688s (userspace) = 14.414s. Nov 8 00:34:12.463106 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:34:12.759705 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:34:12.764692 systemd[1]: Started sshd@0-172.31.19.248:22-139.178.89.65:47128.service - OpenSSH per-connection server daemon (139.178.89.65:47128). Nov 8 00:34:12.938100 sshd[2270]: Accepted publickey for core from 139.178.89.65 port 47128 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:12.941187 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:12.951772 systemd-logind[1990]: New session 1 of user core. Nov 8 00:34:12.953236 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:34:12.959337 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:34:12.979419 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:34:12.988705 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:34:12.997267 (systemd)[2276]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:34:13.132193 systemd[2276]: Queued start job for default target default.target. Nov 8 00:34:13.133056 systemd[2276]: Created slice app.slice - User Application Slice. Nov 8 00:34:13.133090 systemd[2276]: Reached target paths.target - Paths. Nov 8 00:34:13.133111 systemd[2276]: Reached target timers.target - Timers. 
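The CredentialRefresher lines above report a next rotation at an odd 30.824993... minutes, which suggests a jittered interval rather than a fixed one. A hypothetical sketch of such a refresher loop; the agent's actual rotation policy is not visible in this log:

    import random
    import time

    def refresher_loop(fetch_credentials, lifetime_s=3600):
        while True:
            creds = fetch_credentials()  # e.g. from the instance profile role
            # rotate well before expiry, with jitter so many agents spread out
            delay = lifetime_s * random.uniform(0.5, 0.6)
            print(f"next credential rotation in {delay / 60:.2f} minutes")
            time.sleep(delay)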
Nov 8 00:34:13.139512 systemd[2276]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:34:13.150676 systemd[2276]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:34:13.150775 systemd[2276]: Reached target sockets.target - Sockets. Nov 8 00:34:13.150798 systemd[2276]: Reached target basic.target - Basic System. Nov 8 00:34:13.150866 systemd[2276]: Reached target default.target - Main User Target. Nov 8 00:34:13.150910 systemd[2276]: Startup finished in 145ms. Nov 8 00:34:13.151141 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:34:13.158717 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:34:13.305775 systemd[1]: Started sshd@1-172.31.19.248:22-139.178.89.65:47138.service - OpenSSH per-connection server daemon (139.178.89.65:47138). Nov 8 00:34:13.461516 kubelet[2261]: E1108 00:34:13.461333 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:34:13.462102 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 47138 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:13.464594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:34:13.465179 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:13.464775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:34:13.472036 systemd-logind[1990]: New session 2 of user core. Nov 8 00:34:13.480784 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:34:13.608907 sshd[2288]: pam_unix(sshd:session): session closed for user core Nov 8 00:34:13.612537 systemd[1]: sshd@1-172.31.19.248:22-139.178.89.65:47138.service: Deactivated successfully. Nov 8 00:34:13.618071 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:34:13.619077 systemd-logind[1990]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:34:13.620087 systemd-logind[1990]: Removed session 2. Nov 8 00:34:13.636764 systemd[1]: Started sshd@2-172.31.19.248:22-139.178.89.65:47142.service - OpenSSH per-connection server daemon (139.178.89.65:47142). Nov 8 00:34:13.791790 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 47142 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:13.793606 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:13.798552 systemd-logind[1990]: New session 3 of user core. Nov 8 00:34:13.807721 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:34:13.921579 sshd[2299]: pam_unix(sshd:session): session closed for user core Nov 8 00:34:13.925074 systemd[1]: sshd@2-172.31.19.248:22-139.178.89.65:47142.service: Deactivated successfully. Nov 8 00:34:13.928086 systemd-logind[1990]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:34:13.928812 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:34:13.929836 systemd-logind[1990]: Removed session 3. Nov 8 00:34:13.949771 systemd[1]: Started sshd@3-172.31.19.248:22-139.178.89.65:47150.service - OpenSSH per-connection server daemon (139.178.89.65:47150). 
Nov 8 00:34:14.104305 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 47150 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:14.106028 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:14.110428 systemd-logind[1990]: New session 4 of user core. Nov 8 00:34:14.117879 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:34:14.238115 sshd[2307]: pam_unix(sshd:session): session closed for user core Nov 8 00:34:14.240661 systemd[1]: sshd@3-172.31.19.248:22-139.178.89.65:47150.service: Deactivated successfully. Nov 8 00:34:14.243562 systemd-logind[1990]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:34:14.244074 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:34:14.245958 systemd-logind[1990]: Removed session 4. Nov 8 00:34:14.269014 systemd[1]: Started sshd@4-172.31.19.248:22-139.178.89.65:47162.service - OpenSSH per-connection server daemon (139.178.89.65:47162). Nov 8 00:34:14.430920 sshd[2315]: Accepted publickey for core from 139.178.89.65 port 47162 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:14.432247 sshd[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:14.437506 systemd-logind[1990]: New session 5 of user core. Nov 8 00:34:14.446739 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:34:14.569049 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:34:14.569352 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:34:14.586017 sudo[2319]: pam_unix(sudo:session): session closed for user root Nov 8 00:34:14.610337 sshd[2315]: pam_unix(sshd:session): session closed for user core Nov 8 00:34:14.613197 systemd[1]: sshd@4-172.31.19.248:22-139.178.89.65:47162.service: Deactivated successfully. Nov 8 00:34:14.616520 systemd-logind[1990]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:34:14.617995 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:34:14.618921 systemd-logind[1990]: Removed session 5. Nov 8 00:34:14.638758 systemd[1]: Started sshd@5-172.31.19.248:22-139.178.89.65:32788.service - OpenSSH per-connection server daemon (139.178.89.65:32788). Nov 8 00:34:14.802306 sshd[2324]: Accepted publickey for core from 139.178.89.65 port 32788 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:14.803908 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:14.808820 systemd-logind[1990]: New session 6 of user core. Nov 8 00:34:14.810679 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:34:14.918863 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:34:14.919275 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:34:14.923045 sudo[2329]: pam_unix(sudo:session): session closed for user root Nov 8 00:34:14.928641 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:34:14.929038 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:34:14.944719 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
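The sudo/audit sequence above removes two rule files, restarts audit-rules, and both auditctl and augenrules then report "No rules". Done by hand, the roughly equivalent sequence might look like this sketch (root required; auditctl -D flushes all loaded kernel rules, augenrules --load recompiles whatever remains under /etc/audit/rules.d):

    import subprocess

    for rules in ("/etc/audit/rules.d/80-selinux.rules",
                  "/etc/audit/rules.d/99-default.rules"):
        subprocess.run(["rm", "-f", rules], check=True)

    subprocess.run(["auditctl", "-D"], check=True)        # delete all loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # "No rules" left to load here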
Nov 8 00:34:14.947225 auditctl[2332]: No rules Nov 8 00:34:14.947628 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:34:14.947984 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:34:14.962173 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:34:14.988321 augenrules[2351]: No rules Nov 8 00:34:14.990125 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:34:14.994646 sudo[2328]: pam_unix(sudo:session): session closed for user root Nov 8 00:34:15.018539 sshd[2324]: pam_unix(sshd:session): session closed for user core Nov 8 00:34:15.021980 systemd[1]: sshd@5-172.31.19.248:22-139.178.89.65:32788.service: Deactivated successfully. Nov 8 00:34:15.025225 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:34:15.025979 systemd-logind[1990]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:34:15.026914 systemd-logind[1990]: Removed session 6. Nov 8 00:34:15.054249 systemd[1]: Started sshd@6-172.31.19.248:22-139.178.89.65:32796.service - OpenSSH per-connection server daemon (139.178.89.65:32796). Nov 8 00:34:15.210901 sshd[2360]: Accepted publickey for core from 139.178.89.65 port 32796 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:34:15.212304 sshd[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:34:15.218019 systemd-logind[1990]: New session 7 of user core. Nov 8 00:34:15.223981 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:34:15.323980 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:34:15.324399 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:34:15.789865 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:34:15.790226 (dockerd)[2380]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:34:16.238826 dockerd[2380]: time="2025-11-08T00:34:16.238696331Z" level=info msg="Starting up" Nov 8 00:34:16.721998 systemd[1]: var-lib-docker-metacopy\x2dcheck2530935731-merged.mount: Deactivated successfully. Nov 8 00:34:16.740750 dockerd[2380]: time="2025-11-08T00:34:16.740701016Z" level=info msg="Loading containers: start." Nov 8 00:34:18.306416 systemd-resolved[1916]: Clock change detected. Flushing caches. Nov 8 00:34:18.329655 kernel: Initializing XFRM netlink socket Nov 8 00:34:18.375128 (udev-worker)[2403]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:34:18.433107 systemd-networkd[1573]: docker0: Link UP Nov 8 00:34:18.449525 dockerd[2380]: time="2025-11-08T00:34:18.449480064Z" level=info msg="Loading containers: done." 
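The daemon announces "API listen on /run/docker.sock" just below; that API is plain HTTP spoken over an AF_UNIX socket, so it can be probed with nothing but the standard library. A minimal sketch, assuming read access to the socket:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # Host header value; ignored by dockerd
            self.path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' when the daemon is healthy

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(conn.getresponse().read()[:120])  # JSON including "Version":"26.1.0" as logged below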
Nov 8 00:34:18.470509 dockerd[2380]: time="2025-11-08T00:34:18.470445767Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:34:18.470713 dockerd[2380]: time="2025-11-08T00:34:18.470569701Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:34:18.470713 dockerd[2380]: time="2025-11-08T00:34:18.470695515Z" level=info msg="Daemon has completed initialization" Nov 8 00:34:18.504249 dockerd[2380]: time="2025-11-08T00:34:18.504166008Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:34:18.504470 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:34:19.489490 containerd[2015]: time="2025-11-08T00:34:19.489447648Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:34:19.993208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762633320.mount: Deactivated successfully. Nov 8 00:34:21.547250 containerd[2015]: time="2025-11-08T00:34:21.547192584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:21.548830 containerd[2015]: time="2025-11-08T00:34:21.548408025Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:34:21.550196 containerd[2015]: time="2025-11-08T00:34:21.549788732Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:21.552760 containerd[2015]: time="2025-11-08T00:34:21.552722658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:21.554091 containerd[2015]: time="2025-11-08T00:34:21.554050619Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.064559143s" Nov 8 00:34:21.554227 containerd[2015]: time="2025-11-08T00:34:21.554208505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:34:21.554981 containerd[2015]: time="2025-11-08T00:34:21.554925117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:34:23.171402 containerd[2015]: time="2025-11-08T00:34:23.171323889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:23.172680 containerd[2015]: time="2025-11-08T00:34:23.172542173Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:34:23.175002 containerd[2015]: time="2025-11-08T00:34:23.173542637Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:23.177094 containerd[2015]: time="2025-11-08T00:34:23.176608189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:23.177928 containerd[2015]: time="2025-11-08T00:34:23.177887781Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.622700025s" Nov 8 00:34:23.178021 containerd[2015]: time="2025-11-08T00:34:23.177934485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:34:23.178881 containerd[2015]: time="2025-11-08T00:34:23.178848359Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:34:24.543489 containerd[2015]: time="2025-11-08T00:34:24.543432396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:24.545650 containerd[2015]: time="2025-11-08T00:34:24.543540151Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:34:24.547154 containerd[2015]: time="2025-11-08T00:34:24.546688749Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:24.550899 containerd[2015]: time="2025-11-08T00:34:24.550849523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:24.552554 containerd[2015]: time="2025-11-08T00:34:24.552506485Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.373518283s" Nov 8 00:34:24.552712 containerd[2015]: time="2025-11-08T00:34:24.552559281Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:34:24.553114 containerd[2015]: time="2025-11-08T00:34:24.553085509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:34:25.161497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:34:25.168642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:34:25.619977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534052075.mount: Deactivated successfully. Nov 8 00:34:25.797882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:34:25.810196 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:34:25.897722 kubelet[2600]: E1108 00:34:25.896758 2600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:34:25.904059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:34:25.904329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:34:26.423018 containerd[2015]: time="2025-11-08T00:34:26.422960939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:26.425113 containerd[2015]: time="2025-11-08T00:34:26.424933497Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:34:26.428569 containerd[2015]: time="2025-11-08T00:34:26.427315643Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:26.430768 containerd[2015]: time="2025-11-08T00:34:26.430717959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:26.431771 containerd[2015]: time="2025-11-08T00:34:26.431493354Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.878366668s" Nov 8 00:34:26.431771 containerd[2015]: time="2025-11-08T00:34:26.431527133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:34:26.432169 containerd[2015]: time="2025-11-08T00:34:26.432151741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:34:26.994593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388537455.mount: Deactivated successfully. 
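Both kubelet failures in this log are the same problem: /var/lib/kubelet/config.yaml does not exist yet (on a kubeadm-managed node that file is normally written by kubeadm init/join, which has not run at this point). A sketch that writes a minimal KubeletConfiguration, with the YAML embedded as a Python string to keep the example self-contained; the field values are illustrative assumptions, cgroupfs chosen to match SystemdCgroup:false in the containerd config dump above, and a real node needs more than this to join a cluster:

    import pathlib

    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    cgroupDriver: cgroupfs
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)  # run as root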
Nov 8 00:34:28.081073 containerd[2015]: time="2025-11-08T00:34:28.081012567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:28.082942 containerd[2015]: time="2025-11-08T00:34:28.082720530Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:34:28.084928 containerd[2015]: time="2025-11-08T00:34:28.084873096Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:28.089130 containerd[2015]: time="2025-11-08T00:34:28.088769951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:28.089856 containerd[2015]: time="2025-11-08T00:34:28.089827635Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.657597873s" Nov 8 00:34:28.089920 containerd[2015]: time="2025-11-08T00:34:28.089861207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:34:28.090445 containerd[2015]: time="2025-11-08T00:34:28.090408234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:34:28.602065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713694020.mount: Deactivated successfully. 
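Every pull record above names the image twice: a mutable repo tag ("registry.k8s.io/coredns/coredns:v1.11.3") and a content-addressed repo digest ("...@sha256:9caa..."), which pins the exact bytes. A small sketch splitting such references apart:

    def parse_ref(ref):
        if "@" in ref:  # digest form pins exact content
            name, digest = ref.split("@", 1)
            return name, None, digest
        if ":" in ref.rsplit("/", 1)[-1]:  # a tag only counts after the last "/"
            name, tag = ref.rsplit(":", 1)
            return name, tag, None
        return ref, "latest", None  # untagged references default to latest

    print(parse_ref("registry.k8s.io/coredns/coredns:v1.11.3"))
    print(parse_ref("registry.k8s.io/coredns/coredns@sha256:"
                    "9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"))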
Nov 8 00:34:28.614493 containerd[2015]: time="2025-11-08T00:34:28.614426139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:34:28.616962 containerd[2015]: time="2025-11-08T00:34:28.616760160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 8 00:34:28.619136 containerd[2015]: time="2025-11-08T00:34:28.619053872Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:34:28.623189 containerd[2015]: time="2025-11-08T00:34:28.623116125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:34:28.624667 containerd[2015]: time="2025-11-08T00:34:28.623989316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.548525ms"
Nov 8 00:34:28.624667 containerd[2015]: time="2025-11-08T00:34:28.624024121Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 8 00:34:28.624667 containerd[2015]: time="2025-11-08T00:34:28.624578238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 8 00:34:29.202001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969530625.mount: Deactivated successfully.
Nov 8 00:34:31.235086 containerd[2015]: time="2025-11-08T00:34:31.235027674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:34:31.236646 containerd[2015]: time="2025-11-08T00:34:31.236414563Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 8 00:34:31.238071 containerd[2015]: time="2025-11-08T00:34:31.238013292Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:34:31.241815 containerd[2015]: time="2025-11-08T00:34:31.241423431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:34:31.242549 containerd[2015]: time="2025-11-08T00:34:31.242512552Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.617907006s"
Nov 8 00:34:31.242654 containerd[2015]: time="2025-11-08T00:34:31.242552212Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 8 00:34:33.788205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
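Each pull above logs both a size and a wall-clock duration, so the effective registry throughput can be read straight off the messages. A quick Python sanity check over the four values recorded above (sizes in bytes and durations in seconds, copied from the log):

    # Image pull rates implied by the containerd "Pulled image ... in ..." lines.
    pulls = {
        "kube-proxy:v1.32.9": (30_923_225, 1.878366668),
        "coredns:v1.11.3":    (18_562_039, 1.657597873),
        "pause:3.10":         (320_368,    0.533548525),
        "etcd:3.5.16-0":      (57_680_541, 2.617907006),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
    # The tiny pause image is dominated by per-request overhead, which is why
    # its rate (~0.6 MB/s) sits far below the larger layers (~11-22 MB/s).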
Nov 8 00:34:33.794948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:34:33.834333 systemd[1]: Reloading requested from client PID 2750 ('systemctl') (unit session-7.scope)...
Nov 8 00:34:33.834351 systemd[1]: Reloading...
Nov 8 00:34:33.967143 zram_generator::config[2791]: No configuration found.
Nov 8 00:34:34.132556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:34:34.216728 systemd[1]: Reloading finished in 381 ms.
Nov 8 00:34:34.251408 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 8 00:34:34.251540 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 8 00:34:34.252314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:34:34.257865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:34:34.497676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:34:34.500115 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:34:34.553326 kubelet[2864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:34:34.553326 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:34:34.553326 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:34:34.553326 kubelet[2864]: I1108 00:34:34.553064 2864 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:34:34.882818 kubelet[2864]: I1108 00:34:34.882617 2864 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:34:34.882818 kubelet[2864]: I1108 00:34:34.882668 2864 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:34:34.883012 kubelet[2864]: I1108 00:34:34.882972 2864 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:34:34.938515 kubelet[2864]: E1108 00:34:34.938469 2864 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.248:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:34.940605 kubelet[2864]: I1108 00:34:34.940568 2864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:34:34.960720 kubelet[2864]: E1108 00:34:34.960674 2864 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:34:34.960720 kubelet[2864]: I1108 00:34:34.960709 2864 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:34:34.964692 kubelet[2864]: I1108 00:34:34.964665 2864 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:34:34.970569 kubelet[2864]: I1108 00:34:34.970493 2864 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:34:34.970796 kubelet[2864]: I1108 00:34:34.970563 2864 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-248","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 8 00:34:34.973075 kubelet[2864]: I1108 00:34:34.973039 2864 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:34:34.973075 kubelet[2864]: I1108 00:34:34.973076 2864 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:34:34.975206 kubelet[2864]: I1108 00:34:34.975167 2864 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:34:34.980074 kubelet[2864]: I1108 00:34:34.979969 2864 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:34:34.980074 kubelet[2864]: I1108 00:34:34.980006 2864 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:34:34.980074 kubelet[2864]: I1108 00:34:34.980029 2864 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:34:34.980074 kubelet[2864]: I1108 00:34:34.980039 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:34:34.988282 kubelet[2864]: W1108 00:34:34.988128 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.248:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-248&limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:34.988420 kubelet[2864]: E1108 00:34:34.988292 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.248:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-248&limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:34.990020 kubelet[2864]: I1108 00:34:34.989989 2864 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:34:34.994131 kubelet[2864]: I1108 00:34:34.994098 2864 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:34:34.996241 kubelet[2864]: W1108 00:34:34.995344 2864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 00:34:34.996241 kubelet[2864]: W1108 00:34:34.995795 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.248:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:34.996241 kubelet[2864]: E1108 00:34:34.995859 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.248:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:34.996413 kubelet[2864]: I1108 00:34:34.996298 2864 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:34:34.996413 kubelet[2864]: I1108 00:34:34.996335 2864 server.go:1287] "Started kubelet"
Nov 8 00:34:34.996616 kubelet[2864]: I1108 00:34:34.996578 2864 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:34:34.999958 kubelet[2864]: I1108 00:34:34.999120 2864 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:34:35.003615 kubelet[2864]: I1108 00:34:35.003451 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:34:35.004253 kubelet[2864]: I1108 00:34:35.004208 2864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:34:35.004461 kubelet[2864]: I1108 00:34:35.004435 2864 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:34:35.011399 kubelet[2864]: I1108 00:34:35.011200 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:34:35.014026 kubelet[2864]: I1108 00:34:35.014005 2864 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:34:35.015054 kubelet[2864]: E1108 00:34:35.014498 2864 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-248\" not found"
Nov 8 00:34:35.016215 kubelet[2864]: E1108 00:34:35.005926 2864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.248:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.248:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-248.1875e0de66bafe1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-248,UID:ip-172-31-19-248,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-248,},FirstTimestamp:2025-11-08 00:34:34.996309533 +0000 UTC m=+0.492884957,LastTimestamp:2025-11-08 00:34:34.996309533 +0000 UTC m=+0.492884957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-248,}"
Nov 8 00:34:35.019757 kubelet[2864]: I1108 00:34:35.019735 2864 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:34:35.021702 kubelet[2864]: I1108 00:34:35.019992 2864 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:34:35.021702 kubelet[2864]: E1108 00:34:35.021079 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="200ms"
Nov 8 00:34:35.021702 kubelet[2864]: W1108 00:34:35.021503 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.248:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:35.021702 kubelet[2864]: E1108 00:34:35.021541 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.248:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:35.024126 kubelet[2864]: I1108 00:34:35.024081 2864 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:34:35.024387 kubelet[2864]: I1108 00:34:35.024178 2864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:34:35.031853 kubelet[2864]: I1108 00:34:35.030210 2864 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:34:35.032808 kubelet[2864]: E1108 00:34:35.032784 2864 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:34:35.039973 kubelet[2864]: I1108 00:34:35.039818 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:34:35.041270 kubelet[2864]: I1108 00:34:35.041250 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:34:35.041983 kubelet[2864]: I1108 00:34:35.041375 2864 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:34:35.041983 kubelet[2864]: I1108 00:34:35.041407 2864 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
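The nodeConfig dump above embeds the kubelet's hard-eviction thresholds as JSON, which makes them easy to pull out and read. A short Python sketch over three of the five thresholds, copied verbatim from that message (the remaining two have the same shape):

    import json

    # Excerpt of HardEvictionThresholds from the nodeConfig logged above.
    thresholds = json.loads('''
    [{"Signal":"imagefs.available","Operator":"LessThan",
      "Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
     {"Signal":"memory.available","Operator":"LessThan",
      "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
     {"Signal":"nodefs.available","Operator":"LessThan",
      "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]
    ''')
    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
        print(f'{t["Signal"]}: evict when {t["Operator"]} {limit}')
    # e.g. memory.available: evict when LessThan 100Mi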
Nov 8 00:34:35.041983 kubelet[2864]: I1108 00:34:35.041414 2864 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:34:35.041983 kubelet[2864]: E1108 00:34:35.041461 2864 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:34:35.057910 kubelet[2864]: W1108 00:34:35.057746 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.248:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:35.057910 kubelet[2864]: E1108 00:34:35.057806 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.248:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:35.062508 kubelet[2864]: I1108 00:34:35.062417 2864 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:34:35.065734 kubelet[2864]: I1108 00:34:35.062657 2864 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:34:35.065734 kubelet[2864]: I1108 00:34:35.062681 2864 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:34:35.065734 kubelet[2864]: I1108 00:34:35.065367 2864 policy_none.go:49] "None policy: Start"
Nov 8 00:34:35.065734 kubelet[2864]: I1108 00:34:35.065387 2864 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:34:35.065734 kubelet[2864]: I1108 00:34:35.065398 2864 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:34:35.070198 kubelet[2864]: I1108 00:34:35.070168 2864 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:34:35.070355 kubelet[2864]: I1108 00:34:35.070340 2864 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:34:35.070390 kubelet[2864]: I1108 00:34:35.070355 2864 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:34:35.071708 kubelet[2864]: I1108 00:34:35.071677 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:34:35.075396 kubelet[2864]: E1108 00:34:35.075367 2864 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:34:35.075587 kubelet[2864]: E1108 00:34:35.075422 2864 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-248\" not found"
Nov 8 00:34:35.149466 kubelet[2864]: E1108 00:34:35.147463 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:35.149466 kubelet[2864]: E1108 00:34:35.149088 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:35.153393 kubelet[2864]: E1108 00:34:35.152587 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:35.172943 kubelet[2864]: I1108 00:34:35.172912 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-248"
Nov 8 00:34:35.173351 kubelet[2864]: E1108 00:34:35.173302 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.248:6443/api/v1/nodes\": dial tcp 172.31.19.248:6443: connect: connection refused" node="ip-172-31-19-248"
Nov 8 00:34:35.222159 kubelet[2864]: E1108 00:34:35.222111 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="400ms"
Nov 8 00:34:35.320933 kubelet[2864]: I1108 00:34:35.320893 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ca3a28a6bd349697a377874e7193988-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-248\" (UID: \"2ca3a28a6bd349697a377874e7193988\") " pod="kube-system/kube-scheduler-ip-172-31-19-248"
Nov 8 00:34:35.320933 kubelet[2864]: I1108 00:34:35.320933 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d9b5afd02568a7235a155531ccd3186-ca-certs\") pod \"kube-apiserver-ip-172-31-19-248\" (UID: \"5d9b5afd02568a7235a155531ccd3186\") " pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:35.320933 kubelet[2864]: I1108 00:34:35.320953 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d9b5afd02568a7235a155531ccd3186-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-248\" (UID: \"5d9b5afd02568a7235a155531ccd3186\") " pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:35.320933 kubelet[2864]: I1108 00:34:35.320971 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d9b5afd02568a7235a155531ccd3186-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-248\" (UID: \"5d9b5afd02568a7235a155531ccd3186\") " pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:35.321440 kubelet[2864]: I1108 00:34:35.321403 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:35.321440 kubelet[2864]: I1108 00:34:35.321433 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:35.321571 kubelet[2864]: I1108 00:34:35.321449 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:35.321571 kubelet[2864]: I1108 00:34:35.321467 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:35.321571 kubelet[2864]: I1108 00:34:35.321489 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:35.375767 kubelet[2864]: I1108 00:34:35.375299 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-248"
Nov 8 00:34:35.375767 kubelet[2864]: E1108 00:34:35.375734 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.248:6443/api/v1/nodes\": dial tcp 172.31.19.248:6443: connect: connection refused" node="ip-172-31-19-248"
Nov 8 00:34:35.450961 containerd[2015]: time="2025-11-08T00:34:35.450662695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-248,Uid:19663d98ee15584f7524cba2b2dd4ba7,Namespace:kube-system,Attempt:0,}"
Nov 8 00:34:35.454221 containerd[2015]: time="2025-11-08T00:34:35.454184150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-248,Uid:2ca3a28a6bd349697a377874e7193988,Namespace:kube-system,Attempt:0,}"
Nov 8 00:34:35.454455 containerd[2015]: time="2025-11-08T00:34:35.454432717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-248,Uid:5d9b5afd02568a7235a155531ccd3186,Namespace:kube-system,Attempt:0,}"
Nov 8 00:34:35.622543 kubelet[2864]: E1108 00:34:35.622484 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="800ms"
Nov 8 00:34:35.778112 kubelet[2864]: I1108 00:34:35.778021 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-248"
Nov 8 00:34:35.778349 kubelet[2864]: E1108 00:34:35.778324 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.248:6443/api/v1/nodes\": dial tcp 172.31.19.248:6443: connect: connection refused" node="ip-172-31-19-248"
Nov 8 00:34:35.875047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291642798.mount: Deactivated successfully.
Nov 8 00:34:35.881816 containerd[2015]: time="2025-11-08T00:34:35.881755786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:34:35.882724 containerd[2015]: time="2025-11-08T00:34:35.882692739Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:34:35.883693 containerd[2015]: time="2025-11-08T00:34:35.883654106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Nov 8 00:34:35.884691 containerd[2015]: time="2025-11-08T00:34:35.884655574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:34:35.885724 containerd[2015]: time="2025-11-08T00:34:35.885692544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:34:35.888643 containerd[2015]: time="2025-11-08T00:34:35.886963884Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:34:35.888643 containerd[2015]: time="2025-11-08T00:34:35.887686480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 8 00:34:35.889704 containerd[2015]: time="2025-11-08T00:34:35.889677276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 8 00:34:35.891314 containerd[2015]: time="2025-11-08T00:34:35.891288334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 440.552178ms"
Nov 8 00:34:35.897263 containerd[2015]: time="2025-11-08T00:34:35.897216095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.73806ms"
Nov 8 00:34:35.897404 containerd[2015]: time="2025-11-08T00:34:35.897379991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 443.120903ms"
Nov 8 00:34:35.898681 kubelet[2864]: W1108 00:34:35.898586 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.248:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:35.898681 kubelet[2864]: E1108 00:34:35.898646 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.248:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:35.913390 kubelet[2864]: W1108 00:34:35.913327 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.248:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:35.913390 kubelet[2864]: E1108 00:34:35.913394 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.248:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:36.035733 kubelet[2864]: W1108 00:34:36.034990 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.248:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:36.035733 kubelet[2864]: E1108 00:34:36.035053 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.248:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:36.085505 containerd[2015]: time="2025-11-08T00:34:36.084522616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:34:36.085505 containerd[2015]: time="2025-11-08T00:34:36.084574138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:34:36.085505 containerd[2015]: time="2025-11-08T00:34:36.084607919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:36.085505 containerd[2015]: time="2025-11-08T00:34:36.084989875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:36.086679 containerd[2015]: time="2025-11-08T00:34:36.085864301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:34:36.086679 containerd[2015]: time="2025-11-08T00:34:36.085904722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:34:36.086679 containerd[2015]: time="2025-11-08T00:34:36.085919414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:36.086679 containerd[2015]: time="2025-11-08T00:34:36.086002194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:36.087404 containerd[2015]: time="2025-11-08T00:34:36.087185332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:34:36.087404 containerd[2015]: time="2025-11-08T00:34:36.087236237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:34:36.087404 containerd[2015]: time="2025-11-08T00:34:36.087251135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:36.087404 containerd[2015]: time="2025-11-08T00:34:36.087325405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:36.188395 containerd[2015]: time="2025-11-08T00:34:36.188354776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-248,Uid:5d9b5afd02568a7235a155531ccd3186,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb967589e19232e387c1f05d3297e8fc7dddffbaa822b394dbbbae06fe4e05f\""
Nov 8 00:34:36.198991 containerd[2015]: time="2025-11-08T00:34:36.198950686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-248,Uid:2ca3a28a6bd349697a377874e7193988,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce71d163e6a3f2d12e8200af3e4b0f5270474b212601b19158dde4490d2c7942\""
Nov 8 00:34:36.205025 containerd[2015]: time="2025-11-08T00:34:36.204988791Z" level=info msg="CreateContainer within sandbox \"2fb967589e19232e387c1f05d3297e8fc7dddffbaa822b394dbbbae06fe4e05f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 8 00:34:36.207647 containerd[2015]: time="2025-11-08T00:34:36.207596376Z" level=info msg="CreateContainer within sandbox \"ce71d163e6a3f2d12e8200af3e4b0f5270474b212601b19158dde4490d2c7942\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 8 00:34:36.214270 containerd[2015]: time="2025-11-08T00:34:36.214156273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-248,Uid:19663d98ee15584f7524cba2b2dd4ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"793e3d34225c4fbd12400f513c974d8daff028589ff777c25fa475137e4c735a\""
Nov 8 00:34:36.218571 containerd[2015]: time="2025-11-08T00:34:36.218532433Z" level=info msg="CreateContainer within sandbox \"793e3d34225c4fbd12400f513c974d8daff028589ff777c25fa475137e4c735a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 8 00:34:36.240761 containerd[2015]: time="2025-11-08T00:34:36.240712260Z" level=info msg="CreateContainer within sandbox \"2fb967589e19232e387c1f05d3297e8fc7dddffbaa822b394dbbbae06fe4e05f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cae9dbc090af78e49f1bce9fbe781afb25002ac03343befa5ba89c228ccf2e40\""
Nov 8 00:34:36.242648 containerd[2015]: time="2025-11-08T00:34:36.242161230Z" level=info msg="CreateContainer within sandbox \"ce71d163e6a3f2d12e8200af3e4b0f5270474b212601b19158dde4490d2c7942\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e\""
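The containerd lines above record the CRI sequence for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places a container inside that sandbox and returns a container id, and StartContainer then runs it. A small Python sketch pairing the ids from two of the messages (the strings below are those log messages with the shell escaping removed):

    import re

    lines = [
        'CreateContainer within sandbox "2fb967589e19232e387c1f05d3297e8fc7dddffbaa822b394dbbbae06fe4e05f" '
        'for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id '
        '"cae9dbc090af78e49f1bce9fbe781afb25002ac03343befa5ba89c228ccf2e40"',
        'CreateContainer within sandbox "ce71d163e6a3f2d12e8200af3e4b0f5270474b212601b19158dde4490d2c7942" '
        'for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id '
        '"05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e"',
    ]
    pattern = re.compile(
        r'within sandbox "(\w+)" for &ContainerMetadata\{Name:([\w-]+).*'
        r'returns container id "(\w+)"'
    )
    for line in lines:
        sandbox, name, container = pattern.search(line).groups()
        # Print shortened ids; the full 64-char ids are in the log above.
        print(f"{name}: container {container[:12]}... in sandbox {sandbox[:12]}...")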
Nov 8 00:34:36.242648 containerd[2015]: time="2025-11-08T00:34:36.242454187Z" level=info msg="StartContainer for \"cae9dbc090af78e49f1bce9fbe781afb25002ac03343befa5ba89c228ccf2e40\""
Nov 8 00:34:36.244662 containerd[2015]: time="2025-11-08T00:34:36.243770637Z" level=info msg="CreateContainer within sandbox \"793e3d34225c4fbd12400f513c974d8daff028589ff777c25fa475137e4c735a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4\""
Nov 8 00:34:36.244662 containerd[2015]: time="2025-11-08T00:34:36.243959564Z" level=info msg="StartContainer for \"05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e\""
Nov 8 00:34:36.257428 containerd[2015]: time="2025-11-08T00:34:36.257387953Z" level=info msg="StartContainer for \"53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4\""
Nov 8 00:34:36.387590 containerd[2015]: time="2025-11-08T00:34:36.387354999Z" level=info msg="StartContainer for \"05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e\" returns successfully"
Nov 8 00:34:36.422476 containerd[2015]: time="2025-11-08T00:34:36.422435042Z" level=info msg="StartContainer for \"53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4\" returns successfully"
Nov 8 00:34:36.422799 containerd[2015]: time="2025-11-08T00:34:36.422713319Z" level=info msg="StartContainer for \"cae9dbc090af78e49f1bce9fbe781afb25002ac03343befa5ba89c228ccf2e40\" returns successfully"
Nov 8 00:34:36.424245 kubelet[2864]: E1108 00:34:36.424210 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": dial tcp 172.31.19.248:6443: connect: connection refused" interval="1.6s"
Nov 8 00:34:36.424658 kubelet[2864]: W1108 00:34:36.424561 2864 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.248:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-248&limit=500&resourceVersion=0": dial tcp 172.31.19.248:6443: connect: connection refused
Nov 8 00:34:36.424789 kubelet[2864]: E1108 00:34:36.424767 2864 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.248:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-248&limit=500&resourceVersion=0\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:36.580641 kubelet[2864]: I1108 00:34:36.580181 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-248"
Nov 8 00:34:36.580641 kubelet[2864]: E1108 00:34:36.580518 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.248:6443/api/v1/nodes\": dial tcp 172.31.19.248:6443: connect: connection refused" node="ip-172-31-19-248"
Nov 8 00:34:37.083411 kubelet[2864]: E1108 00:34:37.083374 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:37.091657 kubelet[2864]: E1108 00:34:37.091610 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:37.094639 kubelet[2864]: E1108 00:34:37.092737 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:37.121369 kubelet[2864]: E1108 00:34:37.121325 2864 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.248:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.248:6443: connect: connection refused" logger="UnhandledError"
Nov 8 00:34:38.098320 kubelet[2864]: E1108 00:34:38.098283 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:38.100645 kubelet[2864]: E1108 00:34:38.099069 2864 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:38.183862 kubelet[2864]: I1108 00:34:38.183832 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-248"
Nov 8 00:34:39.323839 kubelet[2864]: E1108 00:34:39.323800 2864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-248\" not found" node="ip-172-31-19-248"
Nov 8 00:34:39.473281 kubelet[2864]: I1108 00:34:39.472037 2864 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-248"
Nov 8 00:34:39.473281 kubelet[2864]: E1108 00:34:39.472075 2864 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-248\": node \"ip-172-31-19-248\" not found"
Nov 8 00:34:39.500538 kubelet[2864]: E1108 00:34:39.500470 2864 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-248\" not found"
Nov 8 00:34:39.615208 kubelet[2864]: I1108 00:34:39.615080 2864 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:39.621194 kubelet[2864]: E1108 00:34:39.621156 2864 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-248\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:39.621194 kubelet[2864]: I1108 00:34:39.621187 2864 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:39.622838 kubelet[2864]: E1108 00:34:39.622802 2864 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-248\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:39.622838 kubelet[2864]: I1108 00:34:39.622833 2864 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-248"
Nov 8 00:34:39.624781 kubelet[2864]: E1108 00:34:39.624745 2864 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-248\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-248"
Nov 8 00:34:39.997161 kubelet[2864]: I1108 00:34:39.997127 2864 apiserver.go:52] "Watching apiserver"
Nov 8 00:34:40.021038 kubelet[2864]: I1108 00:34:40.020998 2864 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:34:41.482484 systemd[1]: Reloading requested from client PID 3136 ('systemctl') (unit session-7.scope)...
Nov 8 00:34:41.482505 systemd[1]: Reloading...
Nov 8 00:34:41.567652 zram_generator::config[3177]: No configuration found.
Nov 8 00:34:41.712418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:34:41.853851 systemd[1]: Reloading finished in 370 ms.
Nov 8 00:34:41.899884 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:34:41.914053 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:34:41.914396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:34:41.919903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:34:42.110224 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 8 00:34:42.198315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:34:42.212348 (kubelet)[3251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:34:42.286887 kubelet[3251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:34:42.286887 kubelet[3251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:34:42.286887 kubelet[3251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:34:42.287309 kubelet[3251]: I1108 00:34:42.286910 3251 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:34:42.294390 kubelet[3251]: I1108 00:34:42.293918 3251 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:34:42.294390 kubelet[3251]: I1108 00:34:42.293940 3251 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:34:42.294390 kubelet[3251]: I1108 00:34:42.294182 3251 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:34:42.305277 kubelet[3251]: I1108 00:34:42.305234 3251 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 8 00:34:42.310063 kubelet[3251]: I1108 00:34:42.309833 3251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:34:42.330824 kubelet[3251]: E1108 00:34:42.330303 3251 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:34:42.330824 kubelet[3251]: I1108 00:34:42.330350 3251 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
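Worth noting in the failures above: while the API server was unreachable, the lease controller's retry interval doubled on each attempt (interval="200ms", then "400ms", "800ms", and finally "1.6s"), which is plain exponential backoff. A minimal sketch of that schedule; the initial value and factor are read off the logged intervals, while the cap is a hypothetical value for illustration, not something these logs show:

    # Doubling backoff matching the logged intervals 0.2s -> 0.4s -> 0.8s -> 1.6s.
    def backoff_intervals(initial=0.2, factor=2.0, cap=7.0):
        # cap is an assumption; the log never shows the ceiling being hit.
        interval = initial
        while True:
            yield min(interval, cap)
            interval *= factor

    gen = backoff_intervals()
    print([next(gen) for _ in range(5)])  # [0.2, 0.4, 0.8, 1.6, 3.2]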
Nov 8 00:34:42.333380 kubelet[3251]: I1108 00:34:42.333359 3251 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:34:42.334004 kubelet[3251]: I1108 00:34:42.333976 3251 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:34:42.334258 kubelet[3251]: I1108 00:34:42.334089 3251 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-248","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 8 00:34:42.334386 kubelet[3251]: I1108 00:34:42.334376 3251 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:34:42.334432 kubelet[3251]: I1108 00:34:42.334427 3251 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:34:42.334510 kubelet[3251]: I1108 00:34:42.334504 3251 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:34:42.334704 kubelet[3251]: I1108 00:34:42.334696 3251 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:34:42.335150 kubelet[3251]: I1108 00:34:42.335139 3251 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:34:42.335319 kubelet[3251]: I1108 00:34:42.335311 3251 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:34:42.335525 kubelet[3251]: I1108 00:34:42.335461 3251 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:34:42.348244 kubelet[3251]: I1108 00:34:42.348220 3251 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:34:42.348903 kubelet[3251]: I1108 00:34:42.348889 3251 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:34:42.355638 kubelet[3251]: I1108 00:34:42.354455 3251 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:34:42.355638 kubelet[3251]: I1108 00:34:42.354494 3251 server.go:1287] "Started kubelet"
Nov 8 00:34:42.361375 kubelet[3251]: I1108 00:34:42.360658 3251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:34:42.361771 kubelet[3251]: I1108 00:34:42.361743 3251 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:34:42.368994 kubelet[3251]: I1108 00:34:42.367754 3251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:34:42.369394 kubelet[3251]: I1108 00:34:42.369377 3251 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:34:42.370357 kubelet[3251]: I1108 00:34:42.370323 3251 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:34:42.371620 kubelet[3251]: I1108 00:34:42.371603 3251 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:34:42.372118 kubelet[3251]: I1108 00:34:42.372085 3251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:34:42.374166 kubelet[3251]: I1108 00:34:42.374146 3251 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:34:42.374416 kubelet[3251]: I1108 00:34:42.374405 3251 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:34:42.384471 kubelet[3251]: I1108 00:34:42.384075 3251 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:34:42.384471 kubelet[3251]: I1108 00:34:42.384195 3251 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:34:42.388259 kubelet[3251]: I1108 00:34:42.388233 3251 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:34:42.390373 kubelet[3251]: I1108 00:34:42.390278 3251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:34:42.393616 kubelet[3251]: I1108 00:34:42.393585 3251 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:34:42.394026 kubelet[3251]: I1108 00:34:42.393877 3251 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:34:42.394026 kubelet[3251]: I1108 00:34:42.393911 3251 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:34:42.394026 kubelet[3251]: I1108 00:34:42.393921 3251 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:34:42.396568 kubelet[3251]: E1108 00:34:42.396247 3251 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:34:42.418026 kubelet[3251]: E1108 00:34:42.417969 3251 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 00:34:42.481611 kubelet[3251]: I1108 00:34:42.481583 3251 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:34:42.481611 kubelet[3251]: I1108 00:34:42.481601 3251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:34:42.481829 kubelet[3251]: I1108 00:34:42.481743 3251 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:34:42.482010 kubelet[3251]: I1108 00:34:42.481985 3251 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:34:42.482085 kubelet[3251]: I1108 00:34:42.482003 3251 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:34:42.482085 kubelet[3251]: I1108 00:34:42.482036 3251 policy_none.go:49] "None policy: Start"
Nov 8 00:34:42.482085 kubelet[3251]: I1108 00:34:42.482049 3251 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:34:42.482085 kubelet[3251]: I1108 00:34:42.482063 3251 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:34:42.482246 kubelet[3251]: I1108 00:34:42.482215 3251 state_mem.go:75] "Updated machine memory state"
Nov 8 00:34:42.485677 kubelet[3251]: I1108 00:34:42.485652 3251 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:34:42.488252 kubelet[3251]: I1108 00:34:42.485998 3251 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:34:42.488252 kubelet[3251]: I1108 00:34:42.486017 3251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:34:42.488252 kubelet[3251]: I1108 00:34:42.486898 3251 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:34:42.493846 kubelet[3251]: E1108 00:34:42.493820 3251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:34:42.497411 kubelet[3251]: I1108 00:34:42.497373 3251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-248"
Nov 8 00:34:42.500578 kubelet[3251]: I1108 00:34:42.499077 3251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:42.500578 kubelet[3251]: I1108 00:34:42.499521 3251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:42.575696 kubelet[3251]: I1108 00:34:42.575362 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:42.575696 kubelet[3251]: I1108 00:34:42.575400 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:42.575696 kubelet[3251]: I1108 00:34:42.575498 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ca3a28a6bd349697a377874e7193988-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-248\" (UID: \"2ca3a28a6bd349697a377874e7193988\") " pod="kube-system/kube-scheduler-ip-172-31-19-248"
Nov 8 00:34:42.575696 kubelet[3251]: I1108 00:34:42.575516 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d9b5afd02568a7235a155531ccd3186-ca-certs\") pod \"kube-apiserver-ip-172-31-19-248\" (UID: \"5d9b5afd02568a7235a155531ccd3186\") " pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:42.575696 kubelet[3251]: I1108 00:34:42.575531 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:42.576021 kubelet[3251]: I1108 00:34:42.575550 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:42.576021 kubelet[3251]: I1108 00:34:42.575565 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d9b5afd02568a7235a155531ccd3186-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-248\" (UID: \"5d9b5afd02568a7235a155531ccd3186\") " pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:42.576021 kubelet[3251]: I1108 00:34:42.575581 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d9b5afd02568a7235a155531ccd3186-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-248\" (UID: \"5d9b5afd02568a7235a155531ccd3186\") " pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:42.576021 kubelet[3251]: I1108 00:34:42.575596 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19663d98ee15584f7524cba2b2dd4ba7-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-248\" (UID: \"19663d98ee15584f7524cba2b2dd4ba7\") " pod="kube-system/kube-controller-manager-ip-172-31-19-248"
Nov 8 00:34:42.597865 kubelet[3251]: I1108 00:34:42.597708 3251 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-248"
Nov 8 00:34:42.608536 kubelet[3251]: I1108 00:34:42.608488 3251 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-248"
Nov 8 00:34:42.609399 kubelet[3251]: I1108 00:34:42.609255 3251 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-248"
Nov 8 00:34:43.338310 kubelet[3251]: I1108 00:34:43.338275 3251 apiserver.go:52] "Watching apiserver"
Nov 8 00:34:43.375033 kubelet[3251]: I1108 00:34:43.374876 3251 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:34:43.438650 kubelet[3251]: I1108 00:34:43.437331 3251 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:43.450031 kubelet[3251]: E1108 00:34:43.449020 3251 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-248\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-248"
Nov 8 00:34:43.479293 kubelet[3251]: I1108 00:34:43.479205 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-248" podStartSLOduration=1.479170225 podStartE2EDuration="1.479170225s" podCreationTimestamp="2025-11-08 00:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:34:43.477258067 +0000 UTC m=+1.257444019" watchObservedRunningTime="2025-11-08 00:34:43.479170225 +0000 UTC m=+1.259356188"
Nov 8 00:34:43.522664 kubelet[3251]: I1108 00:34:43.521303 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-248" podStartSLOduration=1.521281721 podStartE2EDuration="1.521281721s" podCreationTimestamp="2025-11-08 00:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:34:43.501180757 +0000 UTC m=+1.281366712" watchObservedRunningTime="2025-11-08 00:34:43.521281721 +0000 UTC m=+1.301467657"
Nov 8 00:34:43.549209 kubelet[3251]: I1108 00:34:43.549143 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-248" podStartSLOduration=1.549123961 podStartE2EDuration="1.549123961s" podCreationTimestamp="2025-11-08 00:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:34:43.522086681 +0000 UTC m=+1.302272634" watchObservedRunningTime="2025-11-08 00:34:43.549123961 +0000 UTC m=+1.329309914"
Nov 8 00:34:46.342343 kubelet[3251]: I1108 00:34:46.341869 3251 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 00:34:46.342952 containerd[2015]: time="2025-11-08T00:34:46.342914323Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 8 00:34:46.343718 kubelet[3251]: I1108 00:34:46.343101 3251 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 00:34:47.105548 kubelet[3251]: I1108 00:34:47.105476 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d15c2be-b6d7-422f-bdb7-30241a239241-lib-modules\") pod \"kube-proxy-nrmwm\" (UID: \"0d15c2be-b6d7-422f-bdb7-30241a239241\") " pod="kube-system/kube-proxy-nrmwm"
Nov 8 00:34:47.106206 kubelet[3251]: I1108 00:34:47.105558 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d15c2be-b6d7-422f-bdb7-30241a239241-kube-proxy\") pod \"kube-proxy-nrmwm\" (UID: \"0d15c2be-b6d7-422f-bdb7-30241a239241\") " pod="kube-system/kube-proxy-nrmwm"
Nov 8 00:34:47.106206 kubelet[3251]: I1108 00:34:47.105579 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d15c2be-b6d7-422f-bdb7-30241a239241-xtables-lock\") pod \"kube-proxy-nrmwm\" (UID: \"0d15c2be-b6d7-422f-bdb7-30241a239241\") " pod="kube-system/kube-proxy-nrmwm"
Nov 8 00:34:47.106206 kubelet[3251]: I1108 00:34:47.105596 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68s89\" (UniqueName: \"kubernetes.io/projected/0d15c2be-b6d7-422f-bdb7-30241a239241-kube-api-access-68s89\") pod \"kube-proxy-nrmwm\" (UID: \"0d15c2be-b6d7-422f-bdb7-30241a239241\") " pod="kube-system/kube-proxy-nrmwm"
Nov 8 00:34:47.327461 containerd[2015]: time="2025-11-08T00:34:47.327339694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrmwm,Uid:0d15c2be-b6d7-422f-bdb7-30241a239241,Namespace:kube-system,Attempt:0,}"
Nov 8 00:34:47.358273 containerd[2015]: time="2025-11-08T00:34:47.358079193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:34:47.358273 containerd[2015]: time="2025-11-08T00:34:47.358142622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:34:47.358273 containerd[2015]: time="2025-11-08T00:34:47.358166874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:34:47.358877 containerd[2015]: time="2025-11-08T00:34:47.358774729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:34:47.422095 containerd[2015]: time="2025-11-08T00:34:47.422020316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nrmwm,Uid:0d15c2be-b6d7-422f-bdb7-30241a239241,Namespace:kube-system,Attempt:0,} returns sandbox id \"395477ac17a813b95a3fb35e5fbd58d0c06408f67cf2df312659e4a21f61afc2\"" Nov 8 00:34:47.427812 containerd[2015]: time="2025-11-08T00:34:47.427656003Z" level=info msg="CreateContainer within sandbox \"395477ac17a813b95a3fb35e5fbd58d0c06408f67cf2df312659e4a21f61afc2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:34:47.467776 containerd[2015]: time="2025-11-08T00:34:47.467607249Z" level=info msg="CreateContainer within sandbox \"395477ac17a813b95a3fb35e5fbd58d0c06408f67cf2df312659e4a21f61afc2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2e80db4e1ec32c43bf5d45fb8ef68e422079d9f45923d38db3e59ad6565c695c\"" Nov 8 00:34:47.468966 containerd[2015]: time="2025-11-08T00:34:47.468797100Z" level=info msg="StartContainer for \"2e80db4e1ec32c43bf5d45fb8ef68e422079d9f45923d38db3e59ad6565c695c\"" Nov 8 00:34:47.510493 kubelet[3251]: I1108 00:34:47.510358 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rchr\" (UniqueName: \"kubernetes.io/projected/41717960-a796-4d88-b39b-253ce3f3dc3e-kube-api-access-4rchr\") pod \"tigera-operator-7dcd859c48-btbvj\" (UID: \"41717960-a796-4d88-b39b-253ce3f3dc3e\") " pod="tigera-operator/tigera-operator-7dcd859c48-btbvj" Nov 8 00:34:47.510493 kubelet[3251]: I1108 00:34:47.510417 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/41717960-a796-4d88-b39b-253ce3f3dc3e-var-lib-calico\") pod \"tigera-operator-7dcd859c48-btbvj\" (UID: \"41717960-a796-4d88-b39b-253ce3f3dc3e\") " pod="tigera-operator/tigera-operator-7dcd859c48-btbvj" Nov 8 00:34:47.557800 containerd[2015]: time="2025-11-08T00:34:47.557748180Z" level=info msg="StartContainer for \"2e80db4e1ec32c43bf5d45fb8ef68e422079d9f45923d38db3e59ad6565c695c\" returns successfully" Nov 8 00:34:47.742415 containerd[2015]: time="2025-11-08T00:34:47.742380614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-btbvj,Uid:41717960-a796-4d88-b39b-253ce3f3dc3e,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:34:47.777682 containerd[2015]: time="2025-11-08T00:34:47.777359202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:34:47.777682 containerd[2015]: time="2025-11-08T00:34:47.777450805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:34:47.777682 containerd[2015]: time="2025-11-08T00:34:47.777469034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:34:47.777682 containerd[2015]: time="2025-11-08T00:34:47.777564385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:34:47.836948 containerd[2015]: time="2025-11-08T00:34:47.836898011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-btbvj,Uid:41717960-a796-4d88-b39b-253ce3f3dc3e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c9d6d3be4be5cd43be27b95299f5d2539763c38d7bbc1b908064ece17d13daa\"" Nov 8 00:34:47.838998 containerd[2015]: time="2025-11-08T00:34:47.838775371Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:34:48.226702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314415493.mount: Deactivated successfully. Nov 8 00:34:48.466427 kubelet[3251]: I1108 00:34:48.466346 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nrmwm" podStartSLOduration=1.466328913 podStartE2EDuration="1.466328913s" podCreationTimestamp="2025-11-08 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:34:48.465598351 +0000 UTC m=+6.245784298" watchObservedRunningTime="2025-11-08 00:34:48.466328913 +0000 UTC m=+6.246514865" Nov 8 00:34:49.116232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946013013.mount: Deactivated successfully. Nov 8 00:34:50.028100 containerd[2015]: time="2025-11-08T00:34:50.028026437Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:50.030169 containerd[2015]: time="2025-11-08T00:34:50.030096953Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:34:50.032369 containerd[2015]: time="2025-11-08T00:34:50.032303778Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:50.035924 containerd[2015]: time="2025-11-08T00:34:50.035862929Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:34:50.036817 containerd[2015]: time="2025-11-08T00:34:50.036693602Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.197880414s" Nov 8 00:34:50.036817 containerd[2015]: time="2025-11-08T00:34:50.036731634Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:34:50.044655 containerd[2015]: time="2025-11-08T00:34:50.044566015Z" level=info msg="CreateContainer within sandbox \"8c9d6d3be4be5cd43be27b95299f5d2539763c38d7bbc1b908064ece17d13daa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:34:50.077279 containerd[2015]: time="2025-11-08T00:34:50.077206048Z" level=info msg="CreateContainer within sandbox \"8c9d6d3be4be5cd43be27b95299f5d2539763c38d7bbc1b908064ece17d13daa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6\"" Nov 8 
00:34:50.078085 containerd[2015]: time="2025-11-08T00:34:50.078035908Z" level=info msg="StartContainer for \"34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6\"" Nov 8 00:34:50.107297 systemd[1]: run-containerd-runc-k8s.io-34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6-runc.VHbumC.mount: Deactivated successfully. Nov 8 00:34:50.142591 containerd[2015]: time="2025-11-08T00:34:50.142549881Z" level=info msg="StartContainer for \"34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6\" returns successfully" Nov 8 00:34:51.403214 kubelet[3251]: I1108 00:34:51.402697 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-btbvj" podStartSLOduration=2.20316601 podStartE2EDuration="4.402675969s" podCreationTimestamp="2025-11-08 00:34:47 +0000 UTC" firstStartedPulling="2025-11-08 00:34:47.838301702 +0000 UTC m=+5.618487646" lastFinishedPulling="2025-11-08 00:34:50.037811673 +0000 UTC m=+7.817997605" observedRunningTime="2025-11-08 00:34:50.474146989 +0000 UTC m=+8.254332942" watchObservedRunningTime="2025-11-08 00:34:51.402675969 +0000 UTC m=+9.182861922" Nov 8 00:34:56.505655 update_engine[1992]: I20251108 00:34:56.504930 1992 update_attempter.cc:509] Updating boot flags... Nov 8 00:34:56.673650 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3632) Nov 8 00:34:57.049197 sudo[2364]: pam_unix(sudo:session): session closed for user root Nov 8 00:34:57.076770 sshd[2360]: pam_unix(sshd:session): session closed for user core Nov 8 00:34:57.104446 systemd[1]: sshd@6-172.31.19.248:22-139.178.89.65:32796.service: Deactivated successfully. Nov 8 00:34:57.111578 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:34:57.122182 systemd-logind[1990]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:34:57.132782 systemd-logind[1990]: Removed session 7. 
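The pod_startup_latency_tracker entries above encode a relationship that is easy to miss in the flat key=value dump: podStartSLOduration is the end-to-end startup time minus the image-pull window. Below is a minimal Go sketch recomputing the tigera-operator figures from the timestamps in the 00:34:51 record; treating the SLO value as "E2E minus pull time" is an assumption about the tracker's arithmetic, not a copy of kubelet source.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the tigera-operator tracker entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-11-08 00:34:47 +0000 UTC")             // podCreationTimestamp
        running := parse("2025-11-08 00:34:51.402675969 +0000 UTC")   // watchObservedRunningTime
        pullStart := parse("2025-11-08 00:34:47.838301702 +0000 UTC") // firstStartedPulling
        pullEnd := parse("2025-11-08 00:34:50.037811673 +0000 UTC")   // lastFinishedPulling

        e2e := running.Sub(created)    // 4.402675969s, the logged podStartE2EDuration
        pull := pullEnd.Sub(pullStart) // 2.199509971s spent pulling quay.io/tigera/operator
        // Prints 2.203165998s, within a few tens of nanoseconds of the logged
        // podStartSLOduration=2.20316601 (the tracker works from its own clock readings).
        fmt.Println(e2e - pull)
    }

The static-pod records at 00:34:43 show the degenerate case: nothing is pulled (both pull timestamps are the zero time 0001-01-01), so the SLO and E2E durations coincide, e.g. 1.479170225s for the scheduler.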
Nov 8 00:34:57.175743 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3631) Nov 8 00:35:04.976844 kubelet[3251]: I1108 00:35:04.976324 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l76pn\" (UniqueName: \"kubernetes.io/projected/e419dab0-4092-4dc1-b778-c17a7f4c3d76-kube-api-access-l76pn\") pod \"calico-typha-7f7c9bd69d-5nfs8\" (UID: \"e419dab0-4092-4dc1-b778-c17a7f4c3d76\") " pod="calico-system/calico-typha-7f7c9bd69d-5nfs8" Nov 8 00:35:04.976844 kubelet[3251]: I1108 00:35:04.976498 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e419dab0-4092-4dc1-b778-c17a7f4c3d76-tigera-ca-bundle\") pod \"calico-typha-7f7c9bd69d-5nfs8\" (UID: \"e419dab0-4092-4dc1-b778-c17a7f4c3d76\") " pod="calico-system/calico-typha-7f7c9bd69d-5nfs8" Nov 8 00:35:04.976844 kubelet[3251]: I1108 00:35:04.976574 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e419dab0-4092-4dc1-b778-c17a7f4c3d76-typha-certs\") pod \"calico-typha-7f7c9bd69d-5nfs8\" (UID: \"e419dab0-4092-4dc1-b778-c17a7f4c3d76\") " pod="calico-system/calico-typha-7f7c9bd69d-5nfs8" Nov 8 00:35:05.077434 kubelet[3251]: I1108 00:35:05.077380 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-cni-net-dir\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.077434 kubelet[3251]: I1108 00:35:05.077434 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-flexvol-driver-host\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.078867 kubelet[3251]: I1108 00:35:05.077462 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-var-lib-calico\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.078867 kubelet[3251]: I1108 00:35:05.077501 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-lib-modules\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.078867 kubelet[3251]: I1108 00:35:05.077524 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/355e17e3-9ce7-445e-9a11-06aac68336e6-node-certs\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.078867 kubelet[3251]: I1108 00:35:05.077550 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-cni-bin-dir\") 
pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.078867 kubelet[3251]: I1108 00:35:05.077571 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-var-run-calico\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.079129 kubelet[3251]: I1108 00:35:05.077617 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-policysync\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.079129 kubelet[3251]: I1108 00:35:05.078733 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/355e17e3-9ce7-445e-9a11-06aac68336e6-tigera-ca-bundle\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.079129 kubelet[3251]: I1108 00:35:05.078805 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9v2\" (UniqueName: \"kubernetes.io/projected/355e17e3-9ce7-445e-9a11-06aac68336e6-kube-api-access-nf9v2\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.079129 kubelet[3251]: I1108 00:35:05.078861 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-cni-log-dir\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.079129 kubelet[3251]: I1108 00:35:05.078886 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/355e17e3-9ce7-445e-9a11-06aac68336e6-xtables-lock\") pod \"calico-node-2lt6d\" (UID: \"355e17e3-9ce7-445e-9a11-06aac68336e6\") " pod="calico-system/calico-node-2lt6d" Nov 8 00:35:05.181170 kubelet[3251]: E1108 00:35:05.181067 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.181170 kubelet[3251]: W1108 00:35:05.181091 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.183560 kubelet[3251]: E1108 00:35:05.183433 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[duplicate entries elided: the driver-call.go:262 / driver-call.go:149 / plugins.go:695 probe-failure triplet above repeats about 25 more times between 00:35:05.183 and 00:35:05.207 while kubelet rescans the plugin directory]
Nov 8 00:35:05.212286 containerd[2015]: time="2025-11-08T00:35:05.209951083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7c9bd69d-5nfs8,Uid:e419dab0-4092-4dc1-b778-c17a7f4c3d76,Namespace:calico-system,Attempt:0,}"
Nov 8 00:35:05.267358 containerd[2015]: time="2025-11-08T00:35:05.267162719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:35:05.267613 containerd[2015]: time="2025-11-08T00:35:05.267573808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:35:05.267766 containerd[2015]: time="2025-11-08T00:35:05.267739527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:35:05.268106 containerd[2015]: time="2025-11-08T00:35:05.268068181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:35:05.301124 kubelet[3251]: E1108 00:35:05.301081 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7"
Nov 8 00:35:05.319732 containerd[2015]: time="2025-11-08T00:35:05.319591310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2lt6d,Uid:355e17e3-9ce7-445e-9a11-06aac68336e6,Namespace:calico-system,Attempt:0,}"
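Each probe failure in this run is the same three-step cascade: the kubelet FlexVolume prober sees the nodeagent~uds plugin directory, execs its driver binary with the init argument, the binary does not exist yet, so the call returns empty output, and decoding that empty output as JSON fails. A small Go sketch of the failure mode; driverStatus is an illustrative stand-in, not kubelet's actual type.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus stands in for the JSON a FlexVolume driver must print in
    // response to "init"; the real schema has more fields.
    type driverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        // Resolving a binary that is not installed surfaces exec.ErrNotFound,
        // whose text ("executable file not found in $PATH") is what
        // driver-call.go:149 reports (assuming no "uds" executable on PATH here).
        if _, err := exec.LookPath("uds"); err != nil {
            fmt.Println(err) // exec: "uds": executable file not found in $PATH
        }

        // The failed call produces no output, and decoding an empty byte slice
        // reproduces the driver-call.go:262 error verbatim.
        var out []byte
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }
    }

The noise should stop once calico-node's flexvol driver installer populates /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/; the flexvol-driver-host host-path volume attached at 00:35:05.077 presumably exists for exactly that purpose.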
Error: unexpected end of JSON input" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.376144 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.379645 kubelet[3251]: W1108 00:35:05.376176 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.376223 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.376666 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.379645 kubelet[3251]: W1108 00:35:05.376683 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.376702 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.377047 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.379645 kubelet[3251]: W1108 00:35:05.377060 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.377075 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.379645 kubelet[3251]: E1108 00:35:05.377336 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380251 kubelet[3251]: W1108 00:35:05.377347 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380251 kubelet[3251]: E1108 00:35:05.377370 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.380251 kubelet[3251]: E1108 00:35:05.377615 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380251 kubelet[3251]: W1108 00:35:05.377648 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380251 kubelet[3251]: E1108 00:35:05.377665 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:05.380251 kubelet[3251]: E1108 00:35:05.377995 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380251 kubelet[3251]: W1108 00:35:05.378025 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380251 kubelet[3251]: E1108 00:35:05.378039 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.380251 kubelet[3251]: E1108 00:35:05.378698 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380251 kubelet[3251]: W1108 00:35:05.378718 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.378731 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.379013 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380687 kubelet[3251]: W1108 00:35:05.379024 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.379037 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.379288 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380687 kubelet[3251]: W1108 00:35:05.379299 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.379311 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.379598 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.380687 kubelet[3251]: W1108 00:35:05.379609 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.380687 kubelet[3251]: E1108 00:35:05.379672 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.379952 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.381116 kubelet[3251]: W1108 00:35:05.379964 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.379986 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.380305 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.381116 kubelet[3251]: W1108 00:35:05.380317 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.380330 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.380663 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.381116 kubelet[3251]: W1108 00:35:05.380674 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.380710 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.381116 kubelet[3251]: E1108 00:35:05.381035 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.381525 kubelet[3251]: W1108 00:35:05.381047 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.381525 kubelet[3251]: E1108 00:35:05.381059 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.381525 kubelet[3251]: E1108 00:35:05.381351 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.381525 kubelet[3251]: W1108 00:35:05.381361 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.381525 kubelet[3251]: E1108 00:35:05.381391 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.381765 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.388883 kubelet[3251]: W1108 00:35:05.381778 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.381790 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.382401 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.388883 kubelet[3251]: W1108 00:35:05.382416 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.382432 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.382998 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.388883 kubelet[3251]: W1108 00:35:05.383010 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.383024 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.388883 kubelet[3251]: E1108 00:35:05.383401 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.391675 kubelet[3251]: W1108 00:35:05.383413 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.391675 kubelet[3251]: E1108 00:35:05.383436 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.391675 kubelet[3251]: E1108 00:35:05.383958 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.391675 kubelet[3251]: W1108 00:35:05.383976 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.391675 kubelet[3251]: E1108 00:35:05.383991 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:35:05.391675 kubelet[3251]: I1108 00:35:05.384021 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l858l\" (UniqueName: \"kubernetes.io/projected/deabdfcd-c211-4fd0-a621-ac2732629dc7-kube-api-access-l858l\") pod \"csi-node-driver-xqwhf\" (UID: \"deabdfcd-c211-4fd0-a621-ac2732629dc7\") " pod="calico-system/csi-node-driver-xqwhf" Nov 8 00:35:05.391675 kubelet[3251]: E1108 00:35:05.384494 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.391675 kubelet[3251]: W1108 00:35:05.384509 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.391675 kubelet[3251]: E1108 00:35:05.384523 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.392029 kubelet[3251]: I1108 00:35:05.384550 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/deabdfcd-c211-4fd0-a621-ac2732629dc7-varrun\") pod \"csi-node-driver-xqwhf\" (UID: \"deabdfcd-c211-4fd0-a621-ac2732629dc7\") " pod="calico-system/csi-node-driver-xqwhf" Nov 8 00:35:05.392029 kubelet[3251]: E1108 00:35:05.385108 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.392029 kubelet[3251]: W1108 00:35:05.385122 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.392029 kubelet[3251]: E1108 00:35:05.385158 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:05.392029 kubelet[3251]: I1108 00:35:05.385184 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/deabdfcd-c211-4fd0-a621-ac2732629dc7-kubelet-dir\") pod \"csi-node-driver-xqwhf\" (UID: \"deabdfcd-c211-4fd0-a621-ac2732629dc7\") " pod="calico-system/csi-node-driver-xqwhf" Nov 8 00:35:05.392029 kubelet[3251]: E1108 00:35:05.385687 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:05.392029 kubelet[3251]: W1108 00:35:05.385749 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:05.392029 kubelet[3251]: E1108 00:35:05.386205 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:35:05.392381 kubelet[3251]: I1108 00:35:05.387178 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/deabdfcd-c211-4fd0-a621-ac2732629dc7-socket-dir\") pod \"csi-node-driver-xqwhf\" (UID: \"deabdfcd-c211-4fd0-a621-ac2732629dc7\") " pod="calico-system/csi-node-driver-xqwhf"
Nov 8 00:35:05.398545 kubelet[3251]: I1108 00:35:05.390345 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/deabdfcd-c211-4fd0-a621-ac2732629dc7-registration-dir\") pod \"csi-node-driver-xqwhf\" (UID: \"deabdfcd-c211-4fd0-a621-ac2732629dc7\") " pod="calico-system/csi-node-driver-xqwhf"
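Taken together, the reconciler_common entries enumerate the volume set of the calico-system/csi-node-driver-xqwhf pod: one projected service-account token (kube-api-access-l858l) plus four host-path volumes (varrun, kubelet-dir, socket-dir, registration-dir). A hedged reconstruction of that list with the upstream k8s.io/api types; the log carries only the volume names and pod UID, so the host paths below are assumptions based on a typical CSI node-driver layout, not values read from this host:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func csiNodeDriverVolumes() []corev1.Volume {
        hostPath := func(name, path string) corev1.Volume {
            return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: path},
                },
            }
        }
        return []corev1.Volume{
            hostPath("varrun", "/var/run"),                                    // assumed path
            hostPath("kubelet-dir", "/var/lib/kubelet"),                       // assumed path
            hostPath("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),  // assumed path
            hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"), // assumed path
            // kube-api-access-l858l is the projected service-account token
            // volume that the control plane injects automatically.
        }
    }

    func main() { _ = csiNodeDriverVolumes() }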
Nov 8 00:35:05.434509 containerd[2015]: time="2025-11-08T00:35:05.433454345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:35:05.434509 containerd[2015]: time="2025-11-08T00:35:05.433947809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:35:05.434843 containerd[2015]: time="2025-11-08T00:35:05.434797499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:35:05.435106 containerd[2015]: time="2025-11-08T00:35:05.435070678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:35:05.493803 kubelet[3251]: E1108 00:35:05.493064 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:35:05.493803 kubelet[3251]: W1108 00:35:05.493202 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:35:05.493803 kubelet[3251]: E1108 00:35:05.493231 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
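For contrast with the failing probes above: under the FlexVolume contract the kubelet invokes <driver> init and expects a single JSON status object on stdout, so the bursts stop once a binary that answers correctly exists at the probed path. A stand-in driver is a few lines; the attach capability value here is an assumption (drivers without a controller-side attach step report false):

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            resp := map[string]interface{}{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            }
            json.NewEncoder(os.Stdout).Encode(resp) // valid JSON on stdout: the probe parses cleanly
        }
    }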
Nov 8 00:35:05.620231 containerd[2015]: time="2025-11-08T00:35:05.620166456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7c9bd69d-5nfs8,Uid:e419dab0-4092-4dc1-b778-c17a7f4c3d76,Namespace:calico-system,Attempt:0,} returns sandbox id \"e20363baa0595cbcc281cd29019052bc1bf2bf229cd8ed4c70617a6e87b11b72\""
Nov 8 00:35:05.681059 containerd[2015]: time="2025-11-08T00:35:05.680925845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2lt6d,Uid:355e17e3-9ce7-445e-9a11-06aac68336e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\""
Nov 8 00:35:05.697821 containerd[2015]: time="2025-11-08T00:35:05.697780815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:35:07.234485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185725457.mount: Deactivated successfully.
Nov 8 00:35:07.442140 kubelet[3251]: E1108 00:35:07.440141 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7"
Nov 8 00:35:08.450575 containerd[2015]: time="2025-11-08T00:35:08.450525584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:35:08.453170 containerd[2015]: time="2025-11-08T00:35:08.453124846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:35:08.457296 containerd[2015]: time="2025-11-08T00:35:08.456991853Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:35:08.466661 containerd[2015]: time="2025-11-08T00:35:08.466453840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:35:08.467792 containerd[2015]: time="2025-11-08T00:35:08.467742847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.769892363s"
Nov 8 00:35:08.467792 containerd[2015]: time="2025-11-08T00:35:08.467791746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 00:35:08.469300 containerd[2015]: time="2025-11-08T00:35:08.469263071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:35:08.489230 containerd[2015]: time="2025-11-08T00:35:08.489110306Z" level=info msg="CreateContainer within sandbox \"e20363baa0595cbcc281cd29019052bc1bf2bf229cd8ed4c70617a6e87b11b72\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
\"e20363baa0595cbcc281cd29019052bc1bf2bf229cd8ed4c70617a6e87b11b72\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"338288758abda23a9d287cec8d98ec9188f75c4dc562b2f62cac0b9fac290e90\"" Nov 8 00:35:08.521551 containerd[2015]: time="2025-11-08T00:35:08.520469163Z" level=info msg="StartContainer for \"338288758abda23a9d287cec8d98ec9188f75c4dc562b2f62cac0b9fac290e90\"" Nov 8 00:35:08.613689 containerd[2015]: time="2025-11-08T00:35:08.613616544Z" level=info msg="StartContainer for \"338288758abda23a9d287cec8d98ec9188f75c4dc562b2f62cac0b9fac290e90\" returns successfully" Nov 8 00:35:09.395158 kubelet[3251]: E1108 00:35:09.395105 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:09.478269 systemd[1]: run-containerd-runc-k8s.io-338288758abda23a9d287cec8d98ec9188f75c4dc562b2f62cac0b9fac290e90-runc.421YWv.mount: Deactivated successfully. Nov 8 00:35:09.633432 kubelet[3251]: E1108 00:35:09.633393 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:09.633432 kubelet[3251]: W1108 00:35:09.633427 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:09.635759 kubelet[3251]: E1108 00:35:09.635720 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:09.636451 kubelet[3251]: E1108 00:35:09.636122 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:09.636451 kubelet[3251]: W1108 00:35:09.636137 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:09.636451 kubelet[3251]: E1108 00:35:09.636154 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:35:09.636572 kubelet[3251]: E1108 00:35:09.636513 3251 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:35:09.636572 kubelet[3251]: W1108 00:35:09.636528 3251 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:35:09.636572 kubelet[3251]: E1108 00:35:09.636544 3251 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:35:09.663790 kubelet[3251]: I1108 00:35:09.663586 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f7c9bd69d-5nfs8" podStartSLOduration=2.879494307 podStartE2EDuration="5.662401358s" podCreationTimestamp="2025-11-08 00:35:04 +0000 UTC" firstStartedPulling="2025-11-08 00:35:05.686092849 +0000 UTC m=+23.466278790" lastFinishedPulling="2025-11-08 00:35:08.468999898 +0000 UTC m=+26.249185841" observedRunningTime="2025-11-08 00:35:09.659581418 +0000 UTC m=+27.439767370" watchObservedRunningTime="2025-11-08 00:35:09.662401358 +0000 UTC m=+27.442587348"
Nov 8 00:35:10.480912 containerd[2015]: time="2025-11-08T00:35:10.480854819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:35:10.482693 containerd[2015]: time="2025-11-08T00:35:10.482647732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 8 00:35:10.485669 containerd[2015]: time="2025-11-08T00:35:10.484957008Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:35:10.488300 containerd[2015]: time="2025-11-08T00:35:10.488230348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:35:10.489009 containerd[2015]: time="2025-11-08T00:35:10.488972656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.019667763s"
Nov 8 00:35:10.489009 containerd[2015]: time="2025-11-08T00:35:10.489009242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 8 00:35:10.492371 containerd[2015]: time="2025-11-08T00:35:10.492320051Z" level=info msg="CreateContainer within sandbox \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 8 00:35:10.517283 containerd[2015]: time="2025-11-08T00:35:10.517232089Z" level=info msg="CreateContainer within sandbox \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54af4413fcd2b89c3206c253f830a286b8a11012cf1e6d433e523b4f69bf62eb\""
Nov 8 00:35:10.520461 containerd[2015]: time="2025-11-08T00:35:10.518996896Z" level=info msg="StartContainer for \"54af4413fcd2b89c3206c253f830a286b8a11012cf1e6d433e523b4f69bf62eb\""
Nov 8 00:35:10.621977 containerd[2015]: time="2025-11-08T00:35:10.621440800Z" level=info msg="StartContainer for \"54af4413fcd2b89c3206c253f830a286b8a11012cf1e6d433e523b4f69bf62eb\" returns successfully"
Nov 8 00:35:10.650342 kubelet[3251]: I1108 00:35:10.650314 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
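The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:35:09.662401358 minus 00:35:04 = 5.662401358s), and podStartSLOduration excludes the image-pull window, which the monotonic m=+ offsets make exact. A worked check:

    package main

    import "fmt"

    func main() {
        const (
            e2e              = 5.662401358  // podStartE2EDuration, seconds
            firstStartedPull = 23.466278790 // m=+ offset of firstStartedPulling
            lastFinishedPull = 26.249185841 // m=+ offset of lastFinishedPulling
        )
        pulling := lastFinishedPull - firstStartedPull
        fmt.Printf("pulling: %.9fs, SLO duration: %.9fs\n", pulling, e2e-pulling)
        // expected: pulling: 2.782907051s, SLO duration: 2.879494307s,
        // matching podStartSLOduration=2.879494307 in the log.
    }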
run-containerd-io.containerd.runtime.v2.task-k8s.io-54af4413fcd2b89c3206c253f830a286b8a11012cf1e6d433e523b4f69bf62eb-rootfs.mount: Deactivated successfully. Nov 8 00:35:10.929915 containerd[2015]: time="2025-11-08T00:35:10.916060650Z" level=info msg="shim disconnected" id=54af4413fcd2b89c3206c253f830a286b8a11012cf1e6d433e523b4f69bf62eb namespace=k8s.io Nov 8 00:35:10.929915 containerd[2015]: time="2025-11-08T00:35:10.929908657Z" level=warning msg="cleaning up after shim disconnected" id=54af4413fcd2b89c3206c253f830a286b8a11012cf1e6d433e523b4f69bf62eb namespace=k8s.io Nov 8 00:35:10.930209 containerd[2015]: time="2025-11-08T00:35:10.929933627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:35:11.395305 kubelet[3251]: E1108 00:35:11.395121 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:11.660061 containerd[2015]: time="2025-11-08T00:35:11.659788207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:35:12.712722 kubelet[3251]: I1108 00:35:12.712680 3251 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:35:13.394841 kubelet[3251]: E1108 00:35:13.394796 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:15.394282 kubelet[3251]: E1108 00:35:15.394231 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:16.234892 containerd[2015]: time="2025-11-08T00:35:16.234820929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:16.236708 containerd[2015]: time="2025-11-08T00:35:16.236657172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:35:16.239911 containerd[2015]: time="2025-11-08T00:35:16.238930525Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:16.242464 containerd[2015]: time="2025-11-08T00:35:16.242428960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:16.242981 containerd[2015]: time="2025-11-08T00:35:16.242947769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.58312277s" Nov 8 00:35:16.243085 containerd[2015]: 
time="2025-11-08T00:35:16.242983036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:35:16.246526 containerd[2015]: time="2025-11-08T00:35:16.246486509Z" level=info msg="CreateContainer within sandbox \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:35:16.273736 containerd[2015]: time="2025-11-08T00:35:16.273688723Z" level=info msg="CreateContainer within sandbox \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"291f8fca1c834a52c19921621a967eb97381e152b2131d94edc4b77a5492676f\"" Nov 8 00:35:16.275776 containerd[2015]: time="2025-11-08T00:35:16.274244548Z" level=info msg="StartContainer for \"291f8fca1c834a52c19921621a967eb97381e152b2131d94edc4b77a5492676f\"" Nov 8 00:35:16.356726 containerd[2015]: time="2025-11-08T00:35:16.355566388Z" level=info msg="StartContainer for \"291f8fca1c834a52c19921621a967eb97381e152b2131d94edc4b77a5492676f\" returns successfully" Nov 8 00:35:17.395407 kubelet[3251]: E1108 00:35:17.395355 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:17.451483 kubelet[3251]: I1108 00:35:17.451450 3251 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:35:17.456457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-291f8fca1c834a52c19921621a967eb97381e152b2131d94edc4b77a5492676f-rootfs.mount: Deactivated successfully. 
Nov 8 00:35:17.465107 containerd[2015]: time="2025-11-08T00:35:17.464919055Z" level=info msg="shim disconnected" id=291f8fca1c834a52c19921621a967eb97381e152b2131d94edc4b77a5492676f namespace=k8s.io
Nov 8 00:35:17.465107 containerd[2015]: time="2025-11-08T00:35:17.464999417Z" level=warning msg="cleaning up after shim disconnected" id=291f8fca1c834a52c19921621a967eb97381e152b2131d94edc4b77a5492676f namespace=k8s.io
Nov 8 00:35:17.465107 containerd[2015]: time="2025-11-08T00:35:17.465011471Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:35:17.627276 kubelet[3251]: I1108 00:35:17.626786 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945lb\" (UniqueName: \"kubernetes.io/projected/a26fe127-76bb-40fc-84b3-832fbd258b3a-kube-api-access-945lb\") pod \"whisker-56499b9b97-nd2wd\" (UID: \"a26fe127-76bb-40fc-84b3-832fbd258b3a\") " pod="calico-system/whisker-56499b9b97-nd2wd"
Nov 8 00:35:17.627276 kubelet[3251]: I1108 00:35:17.626849 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa70c9d5-021d-43b0-b46a-b204a72a1b25-config-volume\") pod \"coredns-668d6bf9bc-wspq5\" (UID: \"fa70c9d5-021d-43b0-b46a-b204a72a1b25\") " pod="kube-system/coredns-668d6bf9bc-wspq5"
Nov 8 00:35:17.627276 kubelet[3251]: I1108 00:35:17.626879 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95719bb6-d015-4d14-97fc-c6a4da2f553e-tigera-ca-bundle\") pod \"calico-kube-controllers-6d6dbcdb77-gwjrt\" (UID: \"95719bb6-d015-4d14-97fc-c6a4da2f553e\") " pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt"
Nov 8 00:35:17.627276 kubelet[3251]: I1108 00:35:17.626903 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw65l\" (UniqueName: \"kubernetes.io/projected/fa70c9d5-021d-43b0-b46a-b204a72a1b25-kube-api-access-jw65l\") pod \"coredns-668d6bf9bc-wspq5\" (UID: \"fa70c9d5-021d-43b0-b46a-b204a72a1b25\") " pod="kube-system/coredns-668d6bf9bc-wspq5"
Nov 8 00:35:17.627276 kubelet[3251]: I1108 00:35:17.626928 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8a56697-875b-4b53-b7cd-550689e931a7-calico-apiserver-certs\") pod \"calico-apiserver-66479b5f68-gw447\" (UID: \"f8a56697-875b-4b53-b7cd-550689e931a7\") " pod="calico-apiserver/calico-apiserver-66479b5f68-gw447"
Nov 8 00:35:17.627883 kubelet[3251]: I1108 00:35:17.626952 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8220629-4d1c-4d1f-829d-7ab1eb924825-calico-apiserver-certs\") pod \"calico-apiserver-76d787d64-qzb5q\" (UID: \"a8220629-4d1c-4d1f-829d-7ab1eb924825\") " pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q"
Nov 8 00:35:17.627883 kubelet[3251]: I1108 00:35:17.626981 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/941e9e80-4862-4e00-88e0-89c895eac1a2-goldmane-ca-bundle\") pod \"goldmane-666569f655-kblsp\" (UID: \"941e9e80-4862-4e00-88e0-89c895eac1a2\") " pod="calico-system/goldmane-666569f655-kblsp"
Nov 8 00:35:17.627883 kubelet[3251]: I1108 00:35:17.627005 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcgbw\" (UniqueName: \"kubernetes.io/projected/941e9e80-4862-4e00-88e0-89c895eac1a2-kube-api-access-dcgbw\") pod \"goldmane-666569f655-kblsp\" (UID: \"941e9e80-4862-4e00-88e0-89c895eac1a2\") " pod="calico-system/goldmane-666569f655-kblsp"
Nov 8 00:35:17.627883 kubelet[3251]: I1108 00:35:17.627029 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941e9e80-4862-4e00-88e0-89c895eac1a2-config\") pod \"goldmane-666569f655-kblsp\" (UID: \"941e9e80-4862-4e00-88e0-89c895eac1a2\") " pod="calico-system/goldmane-666569f655-kblsp"
Nov 8 00:35:17.627883 kubelet[3251]: I1108 00:35:17.627056 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3efb6459-6c10-4cbc-8627-eed63b51acf1-calico-apiserver-certs\") pod \"calico-apiserver-76d787d64-blwxn\" (UID: \"3efb6459-6c10-4cbc-8627-eed63b51acf1\") " pod="calico-apiserver/calico-apiserver-76d787d64-blwxn"
Nov 8 00:35:17.628755 kubelet[3251]: I1108 00:35:17.627107 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77d5a3a3-e22d-4eb1-b462-92484d58d55c-config-volume\") pod \"coredns-668d6bf9bc-tmmhp\" (UID: \"77d5a3a3-e22d-4eb1-b462-92484d58d55c\") " pod="kube-system/coredns-668d6bf9bc-tmmhp"
Nov 8 00:35:17.628755 kubelet[3251]: I1108 00:35:17.627136 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-ca-bundle\") pod \"whisker-56499b9b97-nd2wd\" (UID: \"a26fe127-76bb-40fc-84b3-832fbd258b3a\") " pod="calico-system/whisker-56499b9b97-nd2wd"
Nov 8 00:35:17.628755 kubelet[3251]: I1108 00:35:17.627192 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8v6w\" (UniqueName: \"kubernetes.io/projected/3efb6459-6c10-4cbc-8627-eed63b51acf1-kube-api-access-t8v6w\") pod \"calico-apiserver-76d787d64-blwxn\" (UID: \"3efb6459-6c10-4cbc-8627-eed63b51acf1\") " pod="calico-apiserver/calico-apiserver-76d787d64-blwxn"
Nov 8 00:35:17.628755 kubelet[3251]: I1108 00:35:17.627223 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lvqd\" (UniqueName: \"kubernetes.io/projected/f8a56697-875b-4b53-b7cd-550689e931a7-kube-api-access-6lvqd\") pod \"calico-apiserver-66479b5f68-gw447\" (UID: \"f8a56697-875b-4b53-b7cd-550689e931a7\") " pod="calico-apiserver/calico-apiserver-66479b5f68-gw447"
Nov 8 00:35:17.628755 kubelet[3251]: I1108 00:35:17.627256 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nz66\" (UniqueName: \"kubernetes.io/projected/95719bb6-d015-4d14-97fc-c6a4da2f553e-kube-api-access-7nz66\") pod \"calico-kube-controllers-6d6dbcdb77-gwjrt\" (UID: \"95719bb6-d015-4d14-97fc-c6a4da2f553e\") " pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt"
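With the node Ready, the scheduler binds the pending pods and the kubelet's volume reconciler registers every volume each pod declares before mounting: projected service-account tokens (the kube-api-access-* names), ConfigMaps, and Secrets. As a hedged sketch, the same declared-volume list can be read back with client-go; the pod name and namespace below are taken from the log, the local kubeconfig path is an assumption:

    // Hedged sketch: list the volumes a pod declares, mirroring what the
    // reconciler entries above iterate over.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pod, err := cs.CoreV1().Pods("calico-system").Get(
            context.TODO(), "whisker-56499b9b97-nd2wd", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, v := range pod.Spec.Volumes {
            // e.g. kube-api-access-945lb (projected token), whisker-ca-bundle
            // (ConfigMap), whisker-backend-key-pair (Secret)
            fmt.Println(v.Name)
        }
    }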
\"kubernetes.io/secret/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-backend-key-pair\") pod \"whisker-56499b9b97-nd2wd\" (UID: \"a26fe127-76bb-40fc-84b3-832fbd258b3a\") " pod="calico-system/whisker-56499b9b97-nd2wd" Nov 8 00:35:17.629124 kubelet[3251]: I1108 00:35:17.627309 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/941e9e80-4862-4e00-88e0-89c895eac1a2-goldmane-key-pair\") pod \"goldmane-666569f655-kblsp\" (UID: \"941e9e80-4862-4e00-88e0-89c895eac1a2\") " pod="calico-system/goldmane-666569f655-kblsp" Nov 8 00:35:17.629124 kubelet[3251]: I1108 00:35:17.627348 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f48m\" (UniqueName: \"kubernetes.io/projected/77d5a3a3-e22d-4eb1-b462-92484d58d55c-kube-api-access-6f48m\") pod \"coredns-668d6bf9bc-tmmhp\" (UID: \"77d5a3a3-e22d-4eb1-b462-92484d58d55c\") " pod="kube-system/coredns-668d6bf9bc-tmmhp" Nov 8 00:35:17.629124 kubelet[3251]: I1108 00:35:17.627375 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8h6c\" (UniqueName: \"kubernetes.io/projected/a8220629-4d1c-4d1f-829d-7ab1eb924825-kube-api-access-h8h6c\") pod \"calico-apiserver-76d787d64-qzb5q\" (UID: \"a8220629-4d1c-4d1f-829d-7ab1eb924825\") " pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" Nov 8 00:35:17.693317 containerd[2015]: time="2025-11-08T00:35:17.692761935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:35:17.931469 containerd[2015]: time="2025-11-08T00:35:17.929881459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66479b5f68-gw447,Uid:f8a56697-875b-4b53-b7cd-550689e931a7,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:35:17.931821 containerd[2015]: time="2025-11-08T00:35:17.931781999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kblsp,Uid:941e9e80-4862-4e00-88e0-89c895eac1a2,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:17.934858 containerd[2015]: time="2025-11-08T00:35:17.934810567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmmhp,Uid:77d5a3a3-e22d-4eb1-b462-92484d58d55c,Namespace:kube-system,Attempt:0,}" Nov 8 00:35:17.936142 containerd[2015]: time="2025-11-08T00:35:17.936093778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56499b9b97-nd2wd,Uid:a26fe127-76bb-40fc-84b3-832fbd258b3a,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:17.941328 containerd[2015]: time="2025-11-08T00:35:17.941071317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-qzb5q,Uid:a8220629-4d1c-4d1f-829d-7ab1eb924825,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:35:17.941328 containerd[2015]: time="2025-11-08T00:35:17.941088709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6dbcdb77-gwjrt,Uid:95719bb6-d015-4d14-97fc-c6a4da2f553e,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:17.943662 containerd[2015]: time="2025-11-08T00:35:17.943320301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-blwxn,Uid:3efb6459-6c10-4cbc-8627-eed63b51acf1,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:35:17.946392 containerd[2015]: time="2025-11-08T00:35:17.946293741Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-wspq5,Uid:fa70c9d5-021d-43b0-b46a-b204a72a1b25,Namespace:kube-system,Attempt:0,}" Nov 8 00:35:18.370525 containerd[2015]: time="2025-11-08T00:35:18.370378210Z" level=error msg="Failed to destroy network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.370965 containerd[2015]: time="2025-11-08T00:35:18.370937703Z" level=error msg="Failed to destroy network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.374176 containerd[2015]: time="2025-11-08T00:35:18.374072507Z" level=error msg="Failed to destroy network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.374270 containerd[2015]: time="2025-11-08T00:35:18.374118294Z" level=error msg="encountered an error cleaning up failed sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.374670 containerd[2015]: time="2025-11-08T00:35:18.374578757Z" level=error msg="encountered an error cleaning up failed sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.384418 containerd[2015]: time="2025-11-08T00:35:18.384354926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6dbcdb77-gwjrt,Uid:95719bb6-d015-4d14-97fc-c6a4da2f553e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.389509 containerd[2015]: time="2025-11-08T00:35:18.389443803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66479b5f68-gw447,Uid:f8a56697-875b-4b53-b7cd-550689e931a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.400190 containerd[2015]: time="2025-11-08T00:35:18.374122747Z" level=error msg="encountered an error cleaning up failed sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\", marking sandbox state as SANDBOX_UNKNOWN" 
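Every sandbox add and delete from here on fails on the same stat: the Calico CNI plugin refuses to operate until calico/node has written the node's name to /var/lib/calico/nodename, and the node:v3.30.4 image only started pulling at 00:35:17.693. The failures are therefore expected ordering noise rather than a broken network. A hedged sketch of the gate the error text describes (illustrative logic, not Calico's actual source):

    // The Calico CNI plugin reads /var/lib/calico/nodename and aborts
    // add/delete operations until calico/node has written it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func nodename() (string, error) {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            // This is the condition behind every error in the log above.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, "CNI would fail here:", err)
            os.Exit(1)
        }
        fmt.Println("calico/node is up; nodename:", name)
    }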
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.400759 containerd[2015]: time="2025-11-08T00:35:18.400612619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-blwxn,Uid:3efb6459-6c10-4cbc-8627-eed63b51acf1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.414156 kubelet[3251]: E1108 00:35:18.414090 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.417755 kubelet[3251]: E1108 00:35:18.402040 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.421801 kubelet[3251]: E1108 00:35:18.414799 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" Nov 8 00:35:18.422055 kubelet[3251]: E1108 00:35:18.422014 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" Nov 8 00:35:18.422336 kubelet[3251]: E1108 00:35:18.422280 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66479b5f68-gw447_calico-apiserver(f8a56697-875b-4b53-b7cd-550689e931a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66479b5f68-gw447_calico-apiserver(f8a56697-875b-4b53-b7cd-550689e931a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:35:18.423252 kubelet[3251]: E1108 00:35:18.402133 3251 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.423447 kubelet[3251]: E1108 00:35:18.423421 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" Nov 8 00:35:18.423653 kubelet[3251]: E1108 00:35:18.423511 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" Nov 8 00:35:18.423828 kubelet[3251]: E1108 00:35:18.415611 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" Nov 8 00:35:18.423828 kubelet[3251]: E1108 00:35:18.423699 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" Nov 8 00:35:18.423828 kubelet[3251]: E1108 00:35:18.423741 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d787d64-blwxn_calico-apiserver(3efb6459-6c10-4cbc-8627-eed63b51acf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d787d64-blwxn_calico-apiserver(3efb6459-6c10-4cbc-8627-eed63b51acf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:35:18.424336 kubelet[3251]: E1108 00:35:18.423741 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d6dbcdb77-gwjrt_calico-system(95719bb6-d015-4d14-97fc-c6a4da2f553e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6d6dbcdb77-gwjrt_calico-system(95719bb6-d015-4d14-97fc-c6a4da2f553e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:35:18.438661 containerd[2015]: time="2025-11-08T00:35:18.438208604Z" level=error msg="Failed to destroy network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.438661 containerd[2015]: time="2025-11-08T00:35:18.438586270Z" level=error msg="encountered an error cleaning up failed sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.438848 containerd[2015]: time="2025-11-08T00:35:18.438671964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56499b9b97-nd2wd,Uid:a26fe127-76bb-40fc-84b3-832fbd258b3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.439496 kubelet[3251]: E1108 00:35:18.438982 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.439496 kubelet[3251]: E1108 00:35:18.439057 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56499b9b97-nd2wd" Nov 8 00:35:18.439496 kubelet[3251]: E1108 00:35:18.439091 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56499b9b97-nd2wd" Nov 8 00:35:18.439721 containerd[2015]: time="2025-11-08T00:35:18.439376356Z" level=error msg="Failed to destroy network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.439781 kubelet[3251]: E1108 00:35:18.439143 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56499b9b97-nd2wd_calico-system(a26fe127-76bb-40fc-84b3-832fbd258b3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56499b9b97-nd2wd_calico-system(a26fe127-76bb-40fc-84b3-832fbd258b3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56499b9b97-nd2wd" podUID="a26fe127-76bb-40fc-84b3-832fbd258b3a" Nov 8 00:35:18.440744 containerd[2015]: time="2025-11-08T00:35:18.440700771Z" level=error msg="encountered an error cleaning up failed sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.440895 containerd[2015]: time="2025-11-08T00:35:18.440864953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-qzb5q,Uid:a8220629-4d1c-4d1f-829d-7ab1eb924825,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.441373 kubelet[3251]: E1108 00:35:18.441160 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.441373 kubelet[3251]: E1108 00:35:18.441213 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" Nov 8 00:35:18.441373 kubelet[3251]: E1108 00:35:18.441237 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" Nov 8 00:35:18.441552 kubelet[3251]: E1108 00:35:18.441285 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-76d787d64-qzb5q_calico-apiserver(a8220629-4d1c-4d1f-829d-7ab1eb924825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d787d64-qzb5q_calico-apiserver(a8220629-4d1c-4d1f-829d-7ab1eb924825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:35:18.445390 containerd[2015]: time="2025-11-08T00:35:18.444915985Z" level=error msg="Failed to destroy network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.445800 containerd[2015]: time="2025-11-08T00:35:18.445733528Z" level=error msg="encountered an error cleaning up failed sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.465793 containerd[2015]: time="2025-11-08T00:35:18.453049643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wspq5,Uid:fa70c9d5-021d-43b0-b46a-b204a72a1b25,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.469234 kubelet[3251]: E1108 00:35:18.462771 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.469234 kubelet[3251]: E1108 00:35:18.462833 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wspq5" Nov 8 00:35:18.469234 kubelet[3251]: E1108 00:35:18.462857 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wspq5" Nov 8 00:35:18.469413 kubelet[3251]: E1108 
00:35:18.462905 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wspq5_kube-system(fa70c9d5-021d-43b0-b46a-b204a72a1b25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wspq5_kube-system(fa70c9d5-021d-43b0-b46a-b204a72a1b25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wspq5" podUID="fa70c9d5-021d-43b0-b46a-b204a72a1b25" Nov 8 00:35:18.471656 containerd[2015]: time="2025-11-08T00:35:18.469556323Z" level=error msg="Failed to destroy network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.471656 containerd[2015]: time="2025-11-08T00:35:18.469950098Z" level=error msg="encountered an error cleaning up failed sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.471656 containerd[2015]: time="2025-11-08T00:35:18.470009095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmmhp,Uid:77d5a3a3-e22d-4eb1-b462-92484d58d55c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.473844 kubelet[3251]: E1108 00:35:18.473788 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.475108 kubelet[3251]: E1108 00:35:18.474050 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tmmhp" Nov 8 00:35:18.475108 kubelet[3251]: E1108 00:35:18.474085 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tmmhp" Nov 8 
00:35:18.475108 kubelet[3251]: E1108 00:35:18.474146 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tmmhp_kube-system(77d5a3a3-e22d-4eb1-b462-92484d58d55c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tmmhp_kube-system(77d5a3a3-e22d-4eb1-b462-92484d58d55c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tmmhp" podUID="77d5a3a3-e22d-4eb1-b462-92484d58d55c" Nov 8 00:35:18.486298 containerd[2015]: time="2025-11-08T00:35:18.486110717Z" level=error msg="Failed to destroy network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.488654 containerd[2015]: time="2025-11-08T00:35:18.486692492Z" level=error msg="encountered an error cleaning up failed sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.488654 containerd[2015]: time="2025-11-08T00:35:18.486763829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kblsp,Uid:941e9e80-4862-4e00-88e0-89c895eac1a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.490265 kubelet[3251]: E1108 00:35:18.488859 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.490265 kubelet[3251]: E1108 00:35:18.488928 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kblsp" Nov 8 00:35:18.490265 kubelet[3251]: E1108 00:35:18.488958 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-kblsp" Nov 8 00:35:18.490429 kubelet[3251]: E1108 00:35:18.489007 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kblsp_calico-system(941e9e80-4862-4e00-88e0-89c895eac1a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kblsp_calico-system(941e9e80-4862-4e00-88e0-89c895eac1a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:35:18.492422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8-shm.mount: Deactivated successfully. Nov 8 00:35:18.500954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda-shm.mount: Deactivated successfully. Nov 8 00:35:18.696211 kubelet[3251]: I1108 00:35:18.696084 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:18.701262 kubelet[3251]: I1108 00:35:18.700836 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:18.717411 kubelet[3251]: I1108 00:35:18.716809 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:18.718535 kubelet[3251]: I1108 00:35:18.718516 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:18.721394 kubelet[3251]: I1108 00:35:18.721354 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:18.725758 kubelet[3251]: I1108 00:35:18.725395 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:18.726876 kubelet[3251]: I1108 00:35:18.726850 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:18.731409 kubelet[3251]: I1108 00:35:18.731376 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:18.775160 containerd[2015]: time="2025-11-08T00:35:18.774753251Z" level=info msg="StopPodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\"" Nov 8 00:35:18.776670 containerd[2015]: time="2025-11-08T00:35:18.775831686Z" level=info msg="StopPodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\"" Nov 8 00:35:18.776887 containerd[2015]: time="2025-11-08T00:35:18.776850050Z" level=info msg="Ensure that sandbox c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8 in task-service has been cleanup successfully" Nov 8 00:35:18.778269 
containerd[2015]: time="2025-11-08T00:35:18.777184404Z" level=info msg="Ensure that sandbox 7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd in task-service has been cleanup successfully" Nov 8 00:35:18.778373 containerd[2015]: time="2025-11-08T00:35:18.778284718Z" level=info msg="StopPodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\"" Nov 8 00:35:18.778492 containerd[2015]: time="2025-11-08T00:35:18.778463316Z" level=info msg="Ensure that sandbox 8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9 in task-service has been cleanup successfully" Nov 8 00:35:18.780207 containerd[2015]: time="2025-11-08T00:35:18.779759823Z" level=info msg="StopPodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\"" Nov 8 00:35:18.780207 containerd[2015]: time="2025-11-08T00:35:18.779956692Z" level=info msg="Ensure that sandbox daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e in task-service has been cleanup successfully" Nov 8 00:35:18.780207 containerd[2015]: time="2025-11-08T00:35:18.780030949Z" level=info msg="StopPodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\"" Nov 8 00:35:18.786265 containerd[2015]: time="2025-11-08T00:35:18.786216973Z" level=info msg="Ensure that sandbox 20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415 in task-service has been cleanup successfully" Nov 8 00:35:18.786850 containerd[2015]: time="2025-11-08T00:35:18.786436165Z" level=info msg="StopPodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\"" Nov 8 00:35:18.786850 containerd[2015]: time="2025-11-08T00:35:18.786667941Z" level=info msg="Ensure that sandbox 39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75 in task-service has been cleanup successfully" Nov 8 00:35:18.787887 containerd[2015]: time="2025-11-08T00:35:18.786237970Z" level=info msg="StopPodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\"" Nov 8 00:35:18.788084 containerd[2015]: time="2025-11-08T00:35:18.788059816Z" level=info msg="Ensure that sandbox 575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda in task-service has been cleanup successfully" Nov 8 00:35:18.792279 containerd[2015]: time="2025-11-08T00:35:18.791846126Z" level=info msg="StopPodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\"" Nov 8 00:35:18.792279 containerd[2015]: time="2025-11-08T00:35:18.792037550Z" level=info msg="Ensure that sandbox c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce in task-service has been cleanup successfully" Nov 8 00:35:18.922267 containerd[2015]: time="2025-11-08T00:35:18.922211679Z" level=error msg="StopPodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" failed" error="failed to destroy network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.927899 kubelet[3251]: E1108 00:35:18.927723 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:18.928222 containerd[2015]: time="2025-11-08T00:35:18.928165269Z" level=error msg="StopPodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" failed" error="failed to destroy network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.928647 kubelet[3251]: E1108 00:35:18.928598 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:18.949575 containerd[2015]: time="2025-11-08T00:35:18.948540451Z" level=error msg="StopPodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" failed" error="failed to destroy network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.949807 kubelet[3251]: E1108 00:35:18.927822 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd"} Nov 8 00:35:18.950108 kubelet[3251]: E1108 00:35:18.950084 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a26fe127-76bb-40fc-84b3-832fbd258b3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.950272 kubelet[3251]: E1108 00:35:18.929688 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda"} Nov 8 00:35:18.950332 kubelet[3251]: E1108 00:35:18.950306 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"941e9e80-4862-4e00-88e0-89c895eac1a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.950408 kubelet[3251]: E1108 00:35:18.950342 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"941e9e80-4862-4e00-88e0-89c895eac1a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:35:18.950524 kubelet[3251]: E1108 00:35:18.950500 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a26fe127-76bb-40fc-84b3-832fbd258b3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56499b9b97-nd2wd" podUID="a26fe127-76bb-40fc-84b3-832fbd258b3a" Nov 8 00:35:18.950617 kubelet[3251]: E1108 00:35:18.949998 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:18.950759 kubelet[3251]: E1108 00:35:18.950741 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8"} Nov 8 00:35:18.950851 kubelet[3251]: E1108 00:35:18.950836 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77d5a3a3-e22d-4eb1-b462-92484d58d55c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.951045 kubelet[3251]: E1108 00:35:18.950975 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77d5a3a3-e22d-4eb1-b462-92484d58d55c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tmmhp" podUID="77d5a3a3-e22d-4eb1-b462-92484d58d55c" Nov 8 00:35:18.969179 containerd[2015]: time="2025-11-08T00:35:18.969126536Z" level=error msg="StopPodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" failed" error="failed to destroy network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.969638 kubelet[3251]: E1108 00:35:18.969352 3251 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:18.969638 kubelet[3251]: E1108 00:35:18.969405 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce"} Nov 8 00:35:18.969638 kubelet[3251]: E1108 00:35:18.969457 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa70c9d5-021d-43b0-b46a-b204a72a1b25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.969638 kubelet[3251]: E1108 00:35:18.969501 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa70c9d5-021d-43b0-b46a-b204a72a1b25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wspq5" podUID="fa70c9d5-021d-43b0-b46a-b204a72a1b25" Nov 8 00:35:18.974693 containerd[2015]: time="2025-11-08T00:35:18.974615955Z" level=error msg="StopPodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" failed" error="failed to destroy network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.974912 kubelet[3251]: E1108 00:35:18.974876 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:18.974992 kubelet[3251]: E1108 00:35:18.974931 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75"} Nov 8 00:35:18.974992 kubelet[3251]: E1108 00:35:18.974975 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8a56697-875b-4b53-b7cd-550689e931a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.975191 kubelet[3251]: E1108 00:35:18.975006 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8a56697-875b-4b53-b7cd-550689e931a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:35:18.979051 containerd[2015]: time="2025-11-08T00:35:18.978995395Z" level=error msg="StopPodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" failed" error="failed to destroy network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.979307 kubelet[3251]: E1108 00:35:18.979248 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:18.979381 kubelet[3251]: E1108 00:35:18.979316 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e"} Nov 8 00:35:18.979381 kubelet[3251]: E1108 00:35:18.979363 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95719bb6-d015-4d14-97fc-c6a4da2f553e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.979608 kubelet[3251]: E1108 00:35:18.979395 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95719bb6-d015-4d14-97fc-c6a4da2f553e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:35:18.981437 containerd[2015]: time="2025-11-08T00:35:18.981395287Z" level=error msg="StopPodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" failed" error="failed to destroy network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.981789 kubelet[3251]: E1108 00:35:18.981747 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:18.981889 kubelet[3251]: E1108 00:35:18.981797 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9"} Nov 8 00:35:18.981889 kubelet[3251]: E1108 00:35:18.981840 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3efb6459-6c10-4cbc-8627-eed63b51acf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.981889 kubelet[3251]: E1108 00:35:18.981870 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3efb6459-6c10-4cbc-8627-eed63b51acf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:35:18.984024 containerd[2015]: time="2025-11-08T00:35:18.983981505Z" level=error msg="StopPodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" failed" error="failed to destroy network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:18.984275 kubelet[3251]: E1108 00:35:18.984242 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:18.984364 kubelet[3251]: E1108 00:35:18.984287 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415"} Nov 8 00:35:18.984364 kubelet[3251]: E1108 00:35:18.984331 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"a8220629-4d1c-4d1f-829d-7ab1eb924825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:18.984458 kubelet[3251]: E1108 00:35:18.984360 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8220629-4d1c-4d1f-829d-7ab1eb924825\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:35:19.397593 containerd[2015]: time="2025-11-08T00:35:19.397477452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqwhf,Uid:deabdfcd-c211-4fd0-a621-ac2732629dc7,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:19.478927 containerd[2015]: time="2025-11-08T00:35:19.478873875Z" level=error msg="Failed to destroy network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:19.483380 containerd[2015]: time="2025-11-08T00:35:19.480756244Z" level=error msg="encountered an error cleaning up failed sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:19.483380 containerd[2015]: time="2025-11-08T00:35:19.480822947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqwhf,Uid:deabdfcd-c211-4fd0-a621-ac2732629dc7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:19.482806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9-shm.mount: Deactivated successfully. 
Nov 8 00:35:19.483795 kubelet[3251]: E1108 00:35:19.481025 3251 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:19.483795 kubelet[3251]: E1108 00:35:19.481073 3251 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xqwhf" Nov 8 00:35:19.483795 kubelet[3251]: E1108 00:35:19.481107 3251 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xqwhf" Nov 8 00:35:19.484108 kubelet[3251]: E1108 00:35:19.481151 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:19.737698 kubelet[3251]: I1108 00:35:19.737557 3251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:19.740107 containerd[2015]: time="2025-11-08T00:35:19.738564412Z" level=info msg="StopPodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\"" Nov 8 00:35:19.740556 containerd[2015]: time="2025-11-08T00:35:19.740525773Z" level=info msg="Ensure that sandbox b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9 in task-service has been cleanup successfully" Nov 8 00:35:19.780847 containerd[2015]: time="2025-11-08T00:35:19.780734504Z" level=error msg="StopPodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" failed" error="failed to destroy network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:35:19.781450 kubelet[3251]: E1108 00:35:19.781073 3251 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:19.781450 kubelet[3251]: E1108 00:35:19.781152 3251 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9"} Nov 8 00:35:19.781450 kubelet[3251]: E1108 00:35:19.781217 3251 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"deabdfcd-c211-4fd0-a621-ac2732629dc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:35:19.781450 kubelet[3251]: E1108 00:35:19.781251 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"deabdfcd-c211-4fd0-a621-ac2732629dc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:21.410066 systemd-journald[1503]: Under memory pressure, flushing caches. Nov 8 00:35:21.408030 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:35:21.408091 systemd-resolved[1916]: Flushed all caches. Nov 8 00:35:23.455709 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:35:23.458146 systemd-journald[1503]: Under memory pressure, flushing caches. Nov 8 00:35:23.455746 systemd-resolved[1916]: Flushed all caches. Nov 8 00:35:25.659012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297461040.mount: Deactivated successfully. 
Nov 8 00:35:25.715963 containerd[2015]: time="2025-11-08T00:35:25.715819994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:35:25.743480 containerd[2015]: time="2025-11-08T00:35:25.743425340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.042741838s" Nov 8 00:35:25.745526 containerd[2015]: time="2025-11-08T00:35:25.745372833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:35:25.745526 containerd[2015]: time="2025-11-08T00:35:25.745408331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:25.799092 containerd[2015]: time="2025-11-08T00:35:25.798981340Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:25.800937 containerd[2015]: time="2025-11-08T00:35:25.800183607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:35:25.833654 containerd[2015]: time="2025-11-08T00:35:25.833588976Z" level=info msg="CreateContainer within sandbox \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:35:25.917454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231335643.mount: Deactivated successfully. Nov 8 00:35:25.937691 containerd[2015]: time="2025-11-08T00:35:25.937485550Z" level=info msg="CreateContainer within sandbox \"10d8504673722b81ff04719ce30dcd668a6c18e6fb329f3c42305b4ed9de2187\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a01e3fcbe531af7393296e5a1becfba241784057456686698813316ed822d1c6\"" Nov 8 00:35:25.951190 containerd[2015]: time="2025-11-08T00:35:25.951084599Z" level=info msg="StartContainer for \"a01e3fcbe531af7393296e5a1becfba241784057456686698813316ed822d1c6\"" Nov 8 00:35:26.162912 containerd[2015]: time="2025-11-08T00:35:26.162872959Z" level=info msg="StartContainer for \"a01e3fcbe531af7393296e5a1becfba241784057456686698813316ed822d1c6\" returns successfully" Nov 8 00:35:26.292178 systemd[1]: Started sshd@7-172.31.19.248:22-139.178.89.65:38814.service - OpenSSH per-connection server daemon (139.178.89.65:38814). Nov 8 00:35:26.370094 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:35:26.370229 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:35:26.506729 sshd[4671]: Accepted publickey for core from 139.178.89.65 port 38814 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:26.506635 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:26.531747 systemd-logind[1990]: New session 8 of user core. Nov 8 00:35:26.536213 systemd[1]: Started session-8.scope - Session 8 of User core. 
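The pull that just completed reports enough to check its own throughput: 156883675 bytes read in 8.042741838s is roughly 18.6 MiB/s from ghcr.io. The arithmetic, using only the two figures in the log:

package main

import "fmt"

func main() {
	const bytesRead = 156883675.0 // "active requests=0, bytes read=156883675"
	const seconds = 8.042741838   // "Pulled image ... in 8.042741838s"
	mib := bytesRead / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
	// 149.6 MiB in 8.04s ≈ 18.6 MiB/s
}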
Nov 8 00:35:26.659883 containerd[2015]: time="2025-11-08T00:35:26.659754564Z" level=info msg="StopPodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\"" Nov 8 00:35:26.862460 kubelet[3251]: I1108 00:35:26.849336 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2lt6d" podStartSLOduration=2.726223536 podStartE2EDuration="22.830109015s" podCreationTimestamp="2025-11-08 00:35:04 +0000 UTC" firstStartedPulling="2025-11-08 00:35:05.6972606 +0000 UTC m=+23.477446544" lastFinishedPulling="2025-11-08 00:35:25.80114609 +0000 UTC m=+43.581332023" observedRunningTime="2025-11-08 00:35:26.824230312 +0000 UTC m=+44.604416282" watchObservedRunningTime="2025-11-08 00:35:26.830109015 +0000 UTC m=+44.610294969" Nov 8 00:35:27.428848 systemd-journald[1503]: Under memory pressure, flushing caches. Nov 8 00:35:27.423703 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:35:27.423736 systemd-resolved[1916]: Flushed all caches. Nov 8 00:35:27.524312 sshd[4671]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:27.530111 systemd-logind[1990]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:35:27.531391 systemd[1]: sshd@7-172.31.19.248:22-139.178.89.65:38814.service: Deactivated successfully. Nov 8 00:35:27.538703 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:35:27.539741 systemd-logind[1990]: Removed session 8. Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:26.939 [INFO][4701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:26.940 [INFO][4701] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" iface="eth0" netns="/var/run/netns/cni-13efd7d6-d887-8587-61cd-ef82666a0270" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:26.942 [INFO][4701] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" iface="eth0" netns="/var/run/netns/cni-13efd7d6-d887-8587-61cd-ef82666a0270" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:26.943 [INFO][4701] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" iface="eth0" netns="/var/run/netns/cni-13efd7d6-d887-8587-61cd-ef82666a0270" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:26.944 [INFO][4701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:26.944 [INFO][4701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.515 [INFO][4711] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.521 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
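The pod_startup_latency_tracker entry above is internally consistent: calico-node-2lt6d took 22.830109015s end to end from creation to running, the gap between firstStartedPulling and lastFinishedPulling (20.103885479s) was spent pulling images, and the reported podStartSLOduration of 2.726223536s is exactly the difference, consistent with the startup SLO metric excluding pull time. Verified from the monotonic offsets in the line:

package main

import "fmt"

func main() {
	// The "m=+…" monotonic offsets from the tracker entry, in seconds.
	const (
		firstStartedPulling = 23.477446544
		lastFinishedPulling = 43.581332023
		podStartE2E         = 22.830109015
	)
	pulling := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pulling %.9fs, E2E %.9fs, SLO %.9fs\n",
		pulling, podStartE2E, podStartE2E-pulling) // SLO 2.726223536s
}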
Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.521 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.543 [WARNING][4711] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.543 [INFO][4711] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.545 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:27.550909 containerd[2015]: 2025-11-08 00:35:27.547 [INFO][4701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:27.554520 containerd[2015]: time="2025-11-08T00:35:27.552735424Z" level=info msg="TearDown network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" successfully" Nov 8 00:35:27.554520 containerd[2015]: time="2025-11-08T00:35:27.552779783Z" level=info msg="StopPodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" returns successfully" Nov 8 00:35:27.557070 systemd[1]: run-netns-cni\x2d13efd7d6\x2dd887\x2d8587\x2d61cd\x2def82666a0270.mount: Deactivated successfully. Nov 8 00:35:27.655212 kubelet[3251]: I1108 00:35:27.655146 3251 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-ca-bundle\") pod \"a26fe127-76bb-40fc-84b3-832fbd258b3a\" (UID: \"a26fe127-76bb-40fc-84b3-832fbd258b3a\") " Nov 8 00:35:27.655212 kubelet[3251]: I1108 00:35:27.655219 3251 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-backend-key-pair\") pod \"a26fe127-76bb-40fc-84b3-832fbd258b3a\" (UID: \"a26fe127-76bb-40fc-84b3-832fbd258b3a\") " Nov 8 00:35:27.655428 kubelet[3251]: I1108 00:35:27.655249 3251 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-945lb\" (UniqueName: \"kubernetes.io/projected/a26fe127-76bb-40fc-84b3-832fbd258b3a-kube-api-access-945lb\") pod \"a26fe127-76bb-40fc-84b3-832fbd258b3a\" (UID: \"a26fe127-76bb-40fc-84b3-832fbd258b3a\") " Nov 8 00:35:27.661804 kubelet[3251]: I1108 00:35:27.661062 3251 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a26fe127-76bb-40fc-84b3-832fbd258b3a" (UID: "a26fe127-76bb-40fc-84b3-832fbd258b3a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:35:27.661804 kubelet[3251]: I1108 00:35:27.660220 3251 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26fe127-76bb-40fc-84b3-832fbd258b3a-kube-api-access-945lb" (OuterVolumeSpecName: "kube-api-access-945lb") pod "a26fe127-76bb-40fc-84b3-832fbd258b3a" (UID: "a26fe127-76bb-40fc-84b3-832fbd258b3a"). InnerVolumeSpecName "kube-api-access-945lb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:35:27.662398 systemd[1]: var-lib-kubelet-pods-a26fe127\x2d76bb\x2d40fc\x2d84b3\x2d832fbd258b3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d945lb.mount: Deactivated successfully. Nov 8 00:35:27.676863 kubelet[3251]: I1108 00:35:27.676789 3251 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a26fe127-76bb-40fc-84b3-832fbd258b3a" (UID: "a26fe127-76bb-40fc-84b3-832fbd258b3a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:35:27.677684 systemd[1]: var-lib-kubelet-pods-a26fe127\x2d76bb\x2d40fc\x2d84b3\x2d832fbd258b3a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:35:27.756433 kubelet[3251]: I1108 00:35:27.756024 3251 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-backend-key-pair\") on node \"ip-172-31-19-248\" DevicePath \"\"" Nov 8 00:35:27.756433 kubelet[3251]: I1108 00:35:27.756061 3251 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-945lb\" (UniqueName: \"kubernetes.io/projected/a26fe127-76bb-40fc-84b3-832fbd258b3a-kube-api-access-945lb\") on node \"ip-172-31-19-248\" DevicePath \"\"" Nov 8 00:35:27.756433 kubelet[3251]: I1108 00:35:27.756073 3251 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a26fe127-76bb-40fc-84b3-832fbd258b3a-whisker-ca-bundle\") on node \"ip-172-31-19-248\" DevicePath \"\"" Nov 8 00:35:28.058779 kubelet[3251]: I1108 00:35:28.058573 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d2ca12f7-a78a-4b5d-ab7c-23346dac65ff-whisker-backend-key-pair\") pod \"whisker-6765766d45-dksmq\" (UID: \"d2ca12f7-a78a-4b5d-ab7c-23346dac65ff\") " pod="calico-system/whisker-6765766d45-dksmq" Nov 8 00:35:28.058779 kubelet[3251]: I1108 00:35:28.058621 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2ca12f7-a78a-4b5d-ab7c-23346dac65ff-whisker-ca-bundle\") pod \"whisker-6765766d45-dksmq\" (UID: \"d2ca12f7-a78a-4b5d-ab7c-23346dac65ff\") " pod="calico-system/whisker-6765766d45-dksmq" Nov 8 00:35:28.058779 kubelet[3251]: I1108 00:35:28.058671 3251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcns\" (UniqueName: \"kubernetes.io/projected/d2ca12f7-a78a-4b5d-ab7c-23346dac65ff-kube-api-access-bqcns\") pod \"whisker-6765766d45-dksmq\" (UID: \"d2ca12f7-a78a-4b5d-ab7c-23346dac65ff\") " pod="calico-system/whisker-6765766d45-dksmq" Nov 8 00:35:28.219139 containerd[2015]: 
time="2025-11-08T00:35:28.219075826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6765766d45-dksmq,Uid:d2ca12f7-a78a-4b5d-ab7c-23346dac65ff,Namespace:calico-system,Attempt:0,}" Nov 8 00:35:28.405663 (udev-worker)[4800]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:35:28.412694 systemd-networkd[1573]: cali1024b45d84a: Link UP Nov 8 00:35:28.415382 systemd-networkd[1573]: cali1024b45d84a: Gained carrier Nov 8 00:35:28.422551 kubelet[3251]: I1108 00:35:28.421169 3251 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26fe127-76bb-40fc-84b3-832fbd258b3a" path="/var/lib/kubelet/pods/a26fe127-76bb-40fc-84b3-832fbd258b3a/volumes" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.290 [INFO][4781] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.300 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0 whisker-6765766d45- calico-system d2ca12f7-a78a-4b5d-ab7c-23346dac65ff 990 0 2025-11-08 00:35:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6765766d45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-248 whisker-6765766d45-dksmq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1024b45d84a [] [] }} ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.300 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.330 [INFO][4793] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" HandleID="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Workload="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.331 [INFO][4793] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" HandleID="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Workload="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f270), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-248", "pod":"whisker-6765766d45-dksmq", "timestamp":"2025-11-08 00:35:28.330312632 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.332 [INFO][4793] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.332 [INFO][4793] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.332 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.343 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.356 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.361 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.363 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.365 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.365 [INFO][4793] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.369 [INFO][4793] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.374 [INFO][4793] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.383 [INFO][4793] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.65/26] block=192.168.7.64/26 handle="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.383 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.65/26] handle="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" host="ip-172-31-19-248" Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.383 [INFO][4793] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
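The IPAM trace above narrates its own algorithm: the host looks up an existing affinity, finds it holds block 192.168.7.64/26, loads the block, and claims the first free address (192.168.7.65) under a handle derived from the sandbox ID; when coredns-668d6bf9bc-tmmhp goes through the same path further down, it gets 192.168.7.66. A toy version of that claim step (a sketch only, with none of Calico's datastore, retry, or affinity-confirmation logic; handles shortened):

package main

import (
	"fmt"
	"net/netip"
)

// block mimics one Calico IPAM block: a CIDR plus a record of which
// addresses are claimed and by which handle.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string
}

// assign claims the first free address, as in "Attempting to assign 1
// addresses from block block=192.168.7.64/26".
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.7.64/26"), used: map[netip.Addr]string{}}
	whisker, _ := b.assign("k8s-pod-network.74ad2413…")
	coredns, _ := b.assign("k8s-pod-network.e48c816a…")
	fmt.Println(whisker, coredns) // 192.168.7.65 192.168.7.66
}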
Nov 8 00:35:28.447139 containerd[2015]: 2025-11-08 00:35:28.383 [INFO][4793] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.65/26] IPv6=[] ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" HandleID="k8s-pod-network.74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Workload="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.452765 containerd[2015]: 2025-11-08 00:35:28.387 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0", GenerateName:"whisker-6765766d45-", Namespace:"calico-system", SelfLink:"", UID:"d2ca12f7-a78a-4b5d-ab7c-23346dac65ff", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6765766d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"whisker-6765766d45-dksmq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.7.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1024b45d84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:28.452765 containerd[2015]: 2025-11-08 00:35:28.387 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.65/32] ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.452765 containerd[2015]: 2025-11-08 00:35:28.387 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1024b45d84a ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.452765 containerd[2015]: 2025-11-08 00:35:28.416 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.452765 containerd[2015]: 2025-11-08 00:35:28.418 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" 
WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0", GenerateName:"whisker-6765766d45-", Namespace:"calico-system", SelfLink:"", UID:"d2ca12f7-a78a-4b5d-ab7c-23346dac65ff", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6765766d45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa", Pod:"whisker-6765766d45-dksmq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.7.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1024b45d84a", MAC:"f6:a1:59:ee:45:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:28.452765 containerd[2015]: 2025-11-08 00:35:28.436 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa" Namespace="calico-system" Pod="whisker-6765766d45-dksmq" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--6765766d45--dksmq-eth0" Nov 8 00:35:28.525657 containerd[2015]: time="2025-11-08T00:35:28.522807113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:28.525657 containerd[2015]: time="2025-11-08T00:35:28.522867694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:28.525657 containerd[2015]: time="2025-11-08T00:35:28.522883481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:28.526377 containerd[2015]: time="2025-11-08T00:35:28.526270136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:28.755346 containerd[2015]: time="2025-11-08T00:35:28.755291187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6765766d45-dksmq,Uid:d2ca12f7-a78a-4b5d-ab7c-23346dac65ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"74ad2413c388c28955f17d1841ca56498708de7685922b754fa02dba1bda9efa\"" Nov 8 00:35:28.768057 containerd[2015]: time="2025-11-08T00:35:28.768013842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:35:29.070014 containerd[2015]: time="2025-11-08T00:35:29.069779429Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:29.084735 containerd[2015]: time="2025-11-08T00:35:29.071852364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:35:29.084735 containerd[2015]: time="2025-11-08T00:35:29.072102688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:35:29.085133 kubelet[3251]: E1108 00:35:29.084612 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:35:29.086353 kubelet[3251]: E1108 00:35:29.085593 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:35:29.116225 kubelet[3251]: E1108 00:35:29.116137 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4404d727bb9a4ffdaf41a02b37a33d06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:29.119579 containerd[2015]: time="2025-11-08T00:35:29.119357141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:35:29.189649 kernel: bpftool[4990]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:35:29.398126 containerd[2015]: time="2025-11-08T00:35:29.396813471Z" level=info msg="StopPodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\"" Nov 8 00:35:29.453941 containerd[2015]: time="2025-11-08T00:35:29.453754821Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:29.457658 containerd[2015]: time="2025-11-08T00:35:29.455877404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:35:29.457658 containerd[2015]: time="2025-11-08T00:35:29.455967004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:35:29.457860 kubelet[3251]: E1108 00:35:29.456145 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:35:29.457860 kubelet[3251]: E1108 00:35:29.456198 3251 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:35:29.457973 kubelet[3251]: E1108 00:35:29.456350 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:29.458530 kubelet[3251]: E1108 00:35:29.458464 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 
00:35:29.472729 systemd-journald[1503]: Under memory pressure, flushing caches. Nov 8 00:35:29.472064 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:35:29.472085 systemd-resolved[1916]: Flushed all caches. Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.475 [INFO][5000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.475 [INFO][5000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" iface="eth0" netns="/var/run/netns/cni-1e40d96c-abaf-0210-23a4-cee25af94fc2" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.476 [INFO][5000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" iface="eth0" netns="/var/run/netns/cni-1e40d96c-abaf-0210-23a4-cee25af94fc2" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.476 [INFO][5000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" iface="eth0" netns="/var/run/netns/cni-1e40d96c-abaf-0210-23a4-cee25af94fc2" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.476 [INFO][5000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.476 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.500 [INFO][5009] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.501 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.501 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.508 [WARNING][5009] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.508 [INFO][5009] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.511 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:29.519289 containerd[2015]: 2025-11-08 00:35:29.515 [INFO][5000] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:29.519970 containerd[2015]: time="2025-11-08T00:35:29.519430083Z" level=info msg="TearDown network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" successfully" Nov 8 00:35:29.522375 containerd[2015]: time="2025-11-08T00:35:29.519465966Z" level=info msg="StopPodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" returns successfully" Nov 8 00:35:29.522375 containerd[2015]: time="2025-11-08T00:35:29.521933717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmmhp,Uid:77d5a3a3-e22d-4eb1-b462-92484d58d55c,Namespace:kube-system,Attempt:1,}" Nov 8 00:35:29.528795 systemd[1]: run-netns-cni\x2d1e40d96c\x2dabaf\x2d0210\x2d23a4\x2dcee25af94fc2.mount: Deactivated successfully. Nov 8 00:35:29.607846 systemd-networkd[1573]: cali1024b45d84a: Gained IPv6LL Nov 8 00:35:29.721321 (udev-worker)[4672]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:35:29.735227 systemd-networkd[1573]: vxlan.calico: Link UP Nov 8 00:35:29.735236 systemd-networkd[1573]: vxlan.calico: Gained carrier Nov 8 00:35:29.791405 systemd-networkd[1573]: calia53c6904941: Link UP Nov 8 00:35:29.792992 systemd-networkd[1573]: calia53c6904941: Gained carrier Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.634 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0 coredns-668d6bf9bc- kube-system 77d5a3a3-e22d-4eb1-b462-92484d58d55c 1007 0 2025-11-08 00:34:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-248 coredns-668d6bf9bc-tmmhp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia53c6904941 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.635 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.691 [INFO][5043] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" HandleID="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.691 [INFO][5043] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" HandleID="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-248", "pod":"coredns-668d6bf9bc-tmmhp", 
"timestamp":"2025-11-08 00:35:29.691352002 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.691 [INFO][5043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.691 [INFO][5043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.692 [INFO][5043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.705 [INFO][5043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.716 [INFO][5043] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.729 [INFO][5043] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.733 [INFO][5043] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.740 [INFO][5043] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.740 [INFO][5043] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.745 [INFO][5043] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54 Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.763 [INFO][5043] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.776 [INFO][5043] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.66/26] block=192.168.7.64/26 handle="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.776 [INFO][5043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.66/26] handle="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" host="ip-172-31-19-248" Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.776 [INFO][5043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:29.835195 containerd[2015]: 2025-11-08 00:35:29.776 [INFO][5043] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.66/26] IPv6=[] ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" HandleID="k8s-pod-network.e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.841348 containerd[2015]: 2025-11-08 00:35:29.785 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"77d5a3a3-e22d-4eb1-b462-92484d58d55c", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"coredns-668d6bf9bc-tmmhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53c6904941", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:29.841348 containerd[2015]: 2025-11-08 00:35:29.786 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.66/32] ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.841348 containerd[2015]: 2025-11-08 00:35:29.786 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia53c6904941 ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.841348 containerd[2015]: 2025-11-08 00:35:29.790 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" 
WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.841348 containerd[2015]: 2025-11-08 00:35:29.791 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"77d5a3a3-e22d-4eb1-b462-92484d58d55c", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54", Pod:"coredns-668d6bf9bc-tmmhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53c6904941", MAC:"12:17:eb:3f:8e:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:29.841348 containerd[2015]: 2025-11-08 00:35:29.816 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54" Namespace="kube-system" Pod="coredns-668d6bf9bc-tmmhp" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:29.841885 kubelet[3251]: E1108 00:35:29.835677 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:35:29.920540 containerd[2015]: time="2025-11-08T00:35:29.919596771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:29.920540 containerd[2015]: time="2025-11-08T00:35:29.919702935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:29.920540 containerd[2015]: time="2025-11-08T00:35:29.919728089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:29.920540 containerd[2015]: time="2025-11-08T00:35:29.919861510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:30.028171 containerd[2015]: time="2025-11-08T00:35:30.028049552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tmmhp,Uid:77d5a3a3-e22d-4eb1-b462-92484d58d55c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54\"" Nov 8 00:35:30.033070 containerd[2015]: time="2025-11-08T00:35:30.032864450Z" level=info msg="CreateContainer within sandbox \"e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:35:30.106373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187785213.mount: Deactivated successfully. Nov 8 00:35:30.113434 containerd[2015]: time="2025-11-08T00:35:30.113250793Z" level=info msg="CreateContainer within sandbox \"e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"909b3319c3ad8a40f1d19c17579ac05b9462950a5e0c9b8ef691ccdc508bdfaf\"" Nov 8 00:35:30.115616 containerd[2015]: time="2025-11-08T00:35:30.114513862Z" level=info msg="StartContainer for \"909b3319c3ad8a40f1d19c17579ac05b9462950a5e0c9b8ef691ccdc508bdfaf\"" Nov 8 00:35:30.251054 containerd[2015]: time="2025-11-08T00:35:30.240595258Z" level=info msg="StartContainer for \"909b3319c3ad8a40f1d19c17579ac05b9462950a5e0c9b8ef691ccdc508bdfaf\" returns successfully" Nov 8 00:35:30.403669 containerd[2015]: time="2025-11-08T00:35:30.399894944Z" level=info msg="StopPodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\"" Nov 8 00:35:30.404038 containerd[2015]: time="2025-11-08T00:35:30.403823125Z" level=info msg="StopPodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\"" Nov 8 00:35:30.410659 containerd[2015]: time="2025-11-08T00:35:30.410401005Z" level=info msg="StopPodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\"" Nov 8 00:35:30.417271 containerd[2015]: time="2025-11-08T00:35:30.410726874Z" level=info msg="StopPodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\"" Nov 8 00:35:30.420841 containerd[2015]: time="2025-11-08T00:35:30.413691825Z" level=info msg="StopPodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\"" Nov 8 00:35:30.421088 containerd[2015]: time="2025-11-08T00:35:30.415749101Z" level=info msg="StopPodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\"" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 
00:35:30.534 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.534 [INFO][5251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" iface="eth0" netns="/var/run/netns/cni-f6fb7c52-e51d-3530-8a9a-c3a82af003f0" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.534 [INFO][5251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" iface="eth0" netns="/var/run/netns/cni-f6fb7c52-e51d-3530-8a9a-c3a82af003f0" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.535 [INFO][5251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" iface="eth0" netns="/var/run/netns/cni-f6fb7c52-e51d-3530-8a9a-c3a82af003f0" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.535 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.535 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.693 [INFO][5280] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.694 [INFO][5280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.694 [INFO][5280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.701 [WARNING][5280] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.701 [INFO][5280] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.703 [INFO][5280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:30.714463 containerd[2015]: 2025-11-08 00:35:30.710 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:30.716692 containerd[2015]: time="2025-11-08T00:35:30.715959110Z" level=info msg="TearDown network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" successfully" Nov 8 00:35:30.716692 containerd[2015]: time="2025-11-08T00:35:30.715987469Z" level=info msg="StopPodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" returns successfully" Nov 8 00:35:30.719613 containerd[2015]: time="2025-11-08T00:35:30.719562572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wspq5,Uid:fa70c9d5-021d-43b0-b46a-b204a72a1b25,Namespace:kube-system,Attempt:1,}" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.629 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.633 [INFO][5249] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" iface="eth0" netns="/var/run/netns/cni-ef5384af-de60-a55c-85d7-69e6b1ef6814" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.634 [INFO][5249] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" iface="eth0" netns="/var/run/netns/cni-ef5384af-de60-a55c-85d7-69e6b1ef6814" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.635 [INFO][5249] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" iface="eth0" netns="/var/run/netns/cni-ef5384af-de60-a55c-85d7-69e6b1ef6814" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.635 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.635 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.803 [INFO][5302] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.805 [INFO][5302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.812 [INFO][5302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.861 [WARNING][5302] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.861 [INFO][5302] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.865 [INFO][5302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:30.916004 containerd[2015]: 2025-11-08 00:35:30.884 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:30.923074 containerd[2015]: time="2025-11-08T00:35:30.918090341Z" level=info msg="TearDown network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" successfully" Nov 8 00:35:30.923074 containerd[2015]: time="2025-11-08T00:35:30.918137790Z" level=info msg="StopPodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" returns successfully" Nov 8 00:35:30.923074 containerd[2015]: time="2025-11-08T00:35:30.921999188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6dbcdb77-gwjrt,Uid:95719bb6-d015-4d14-97fc-c6a4da2f553e,Namespace:calico-system,Attempt:1,}" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.618 [INFO][5257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.619 [INFO][5257] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" iface="eth0" netns="/var/run/netns/cni-18e256ab-db86-f770-5933-7ae1383099bc" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.622 [INFO][5257] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" iface="eth0" netns="/var/run/netns/cni-18e256ab-db86-f770-5933-7ae1383099bc" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.624 [INFO][5257] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" iface="eth0" netns="/var/run/netns/cni-18e256ab-db86-f770-5933-7ae1383099bc" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.626 [INFO][5257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.626 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.809 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.809 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.865 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.893 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.894 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.901 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:30.934512 containerd[2015]: 2025-11-08 00:35:30.922 [INFO][5257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:30.941457 containerd[2015]: time="2025-11-08T00:35:30.940705454Z" level=info msg="TearDown network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" successfully" Nov 8 00:35:30.941457 containerd[2015]: time="2025-11-08T00:35:30.940758236Z" level=info msg="StopPodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" returns successfully" Nov 8 00:35:30.947677 containerd[2015]: time="2025-11-08T00:35:30.942027738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kblsp,Uid:941e9e80-4862-4e00-88e0-89c895eac1a2,Namespace:calico-system,Attempt:1,}" Nov 8 00:35:30.946998 systemd[1]: run-netns-cni\x2def5384af\x2dde60\x2da55c\x2d85d7\x2d69e6b1ef6814.mount: Deactivated successfully. Nov 8 00:35:30.947191 systemd[1]: run-netns-cni\x2df6fb7c52\x2de51d\x2d3530\x2d8a9a\x2dc3a82af003f0.mount: Deactivated successfully. Nov 8 00:35:30.964064 systemd[1]: run-netns-cni\x2d18e256ab\x2ddb86\x2df770\x2d5933\x2d7ae1383099bc.mount: Deactivated successfully. 
Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.598 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.598 [INFO][5229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" iface="eth0" netns="/var/run/netns/cni-6f2a526a-6d76-1bb0-370d-adae49d070b4" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.598 [INFO][5229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" iface="eth0" netns="/var/run/netns/cni-6f2a526a-6d76-1bb0-370d-adae49d070b4" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.600 [INFO][5229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" iface="eth0" netns="/var/run/netns/cni-6f2a526a-6d76-1bb0-370d-adae49d070b4" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.600 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.600 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.840 [INFO][5293] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.841 [INFO][5293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.902 [INFO][5293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.914 [WARNING][5293] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.914 [INFO][5293] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.929 [INFO][5293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:30.995485 containerd[2015]: 2025-11-08 00:35:30.970 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:31.005865 systemd[1]: run-netns-cni\x2d6f2a526a\x2d6d76\x2d1bb0\x2d370d\x2dadae49d070b4.mount: Deactivated successfully. 
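Every teardown in this stretch logs the same WARNING, "Asked to release address but it doesn't exist. Ignoring", because the CNI spec requires DEL to be idempotent: a repeated delete for a sandbox whose address was already released must succeed rather than fail the kubelet's cleanup. A sketch of that contract with a hypothetical in-memory store (not Calico's code):

```go
package main

import (
	"fmt"
	"sync"
)

// store is a stand-in for Calico's datastore of handle -> addresses.
type store struct {
	mu      sync.Mutex
	handles map[string][]string
}

// Release is idempotent: releasing an unknown handle warns and returns,
// mirroring "Asked to release address but it doesn't exist. Ignoring".
func (s *store) Release(handle string) {
	s.mu.Lock() // plays the role of the "host-wide IPAM lock" in the log
	defer s.mu.Unlock()
	if _, ok := s.handles[handle]; !ok {
		fmt.Printf("WARNING: handle %q not found, ignoring\n", handle)
		return
	}
	delete(s.handles, handle)
}

func main() {
	s := &store{handles: map[string][]string{"k8s-pod-network.abc": {"192.168.7.66"}}}
	s.Release("k8s-pod-network.abc") // first DEL releases the address
	s.Release("k8s-pod-network.abc") // second DEL warns but still succeeds
}
```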
Nov 8 00:35:31.010725 containerd[2015]: time="2025-11-08T00:35:31.010538941Z" level=info msg="TearDown network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" successfully" Nov 8 00:35:31.014529 containerd[2015]: time="2025-11-08T00:35:31.014475201Z" level=info msg="StopPodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" returns successfully" Nov 8 00:35:31.016367 containerd[2015]: time="2025-11-08T00:35:31.016322758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66479b5f68-gw447,Uid:f8a56697-875b-4b53-b7cd-550689e931a7,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.651 [INFO][5252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.658 [INFO][5252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" iface="eth0" netns="/var/run/netns/cni-a223e613-2112-f7ab-5a4f-8430a2b712c7" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.660 [INFO][5252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" iface="eth0" netns="/var/run/netns/cni-a223e613-2112-f7ab-5a4f-8430a2b712c7" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.662 [INFO][5252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" iface="eth0" netns="/var/run/netns/cni-a223e613-2112-f7ab-5a4f-8430a2b712c7" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.662 [INFO][5252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.662 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.859 [INFO][5311] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.859 [INFO][5311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.961 [INFO][5311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.990 [WARNING][5311] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.991 [INFO][5311] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:30.993 [INFO][5311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:31.026274 containerd[2015]: 2025-11-08 00:35:31.008 [INFO][5252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:31.029445 containerd[2015]: time="2025-11-08T00:35:31.029018544Z" level=info msg="TearDown network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" successfully" Nov 8 00:35:31.029523 containerd[2015]: time="2025-11-08T00:35:31.029477048Z" level=info msg="StopPodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" returns successfully" Nov 8 00:35:31.032848 systemd[1]: run-netns-cni\x2da223e613\x2d2112\x2df7ab\x2d5a4f\x2d8430a2b712c7.mount: Deactivated successfully. Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.654 [INFO][5242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.658 [INFO][5242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" iface="eth0" netns="/var/run/netns/cni-da7821a4-c50a-6f47-68ab-c2a04a12575b" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.659 [INFO][5242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" iface="eth0" netns="/var/run/netns/cni-da7821a4-c50a-6f47-68ab-c2a04a12575b" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.659 [INFO][5242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" iface="eth0" netns="/var/run/netns/cni-da7821a4-c50a-6f47-68ab-c2a04a12575b" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.659 [INFO][5242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.659 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.886 [INFO][5309] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.889 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:30.994 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:31.006 [WARNING][5309] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:31.006 [INFO][5309] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:31.013 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:31.036478 containerd[2015]: 2025-11-08 00:35:31.021 [INFO][5242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:31.037358 containerd[2015]: time="2025-11-08T00:35:31.037323150Z" level=info msg="TearDown network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" successfully" Nov 8 00:35:31.037488 containerd[2015]: time="2025-11-08T00:35:31.037456258Z" level=info msg="StopPodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" returns successfully" Nov 8 00:35:31.037748 containerd[2015]: time="2025-11-08T00:35:31.037721274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-qzb5q,Uid:a8220629-4d1c-4d1f-829d-7ab1eb924825,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:35:31.040747 containerd[2015]: time="2025-11-08T00:35:31.040511082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqwhf,Uid:deabdfcd-c211-4fd0-a621-ac2732629dc7,Namespace:calico-system,Attempt:1,}" Nov 8 00:35:31.046946 systemd[1]: run-netns-cni\x2dda7821a4\x2dc50a\x2d6f47\x2d68ab\x2dc2a04a12575b.mount: Deactivated successfully. Nov 8 00:35:31.202044 systemd-networkd[1573]: vxlan.calico: Gained IPv6LL Nov 8 00:35:31.205128 (udev-worker)[5079]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:35:31.209404 systemd-networkd[1573]: cali46c9f73af32: Link UP Nov 8 00:35:31.212826 systemd-networkd[1573]: cali46c9f73af32: Gained carrier Nov 8 00:35:31.234700 kubelet[3251]: I1108 00:35:31.229809 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tmmhp" podStartSLOduration=44.229784617 podStartE2EDuration="44.229784617s" podCreationTimestamp="2025-11-08 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:35:30.883312402 +0000 UTC m=+48.663498385" watchObservedRunningTime="2025-11-08 00:35:31.229784617 +0000 UTC m=+49.009970572" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.055 [INFO][5324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0 coredns-668d6bf9bc- kube-system fa70c9d5-021d-43b0-b46a-b204a72a1b25 1033 0 2025-11-08 00:34:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-248 coredns-668d6bf9bc-wspq5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46c9f73af32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.055 [INFO][5324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.101 [INFO][5350] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" HandleID="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.101 [INFO][5350] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" HandleID="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-248", "pod":"coredns-668d6bf9bc-wspq5", "timestamp":"2025-11-08 00:35:31.101079913 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.101 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.101 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.101 [INFO][5350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.112 [INFO][5350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.121 [INFO][5350] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.134 [INFO][5350] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.138 [INFO][5350] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.147 [INFO][5350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.147 [INFO][5350] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.150 [INFO][5350] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.162 [INFO][5350] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.178 [INFO][5350] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.67/26] block=192.168.7.64/26 handle="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.178 [INFO][5350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.67/26] handle="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" host="ip-172-31-19-248" Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.178 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
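The pod_startup_latency_tracker entry a few lines up reports podStartSLOduration=44.229784617 for coredns-668d6bf9bc-tmmhp. With no image pull involved (both pulling timestamps are the zero time), that figure is just the watch-observed running time minus the pod's creation timestamp, which can be checked directly from the values in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.String() layout, as kubelet printed the values above
	// (the trailing "m=+..." monotonic reading is dropped before parsing).
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-08 00:34:47 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-08 00:35:31.229784617 +0000 UTC")
	fmt.Println(running.Sub(created)) // 44.229784617s, the podStartSLOduration above
}
```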
Nov 8 00:35:31.250545 containerd[2015]: 2025-11-08 00:35:31.178 [INFO][5350] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.67/26] IPv6=[] ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" HandleID="k8s-pod-network.8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.252956 containerd[2015]: 2025-11-08 00:35:31.193 [INFO][5324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa70c9d5-021d-43b0-b46a-b204a72a1b25", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"coredns-668d6bf9bc-wspq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c9f73af32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:31.252956 containerd[2015]: 2025-11-08 00:35:31.194 [INFO][5324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.67/32] ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.252956 containerd[2015]: 2025-11-08 00:35:31.194 [INFO][5324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c9f73af32 ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.252956 containerd[2015]: 2025-11-08 00:35:31.211 [INFO][5324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" 
WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.252956 containerd[2015]: 2025-11-08 00:35:31.212 [INFO][5324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa70c9d5-021d-43b0-b46a-b204a72a1b25", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de", Pod:"coredns-668d6bf9bc-wspq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c9f73af32", MAC:"4a:1f:5d:1d:4a:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:31.252956 containerd[2015]: 2025-11-08 00:35:31.233 [INFO][5324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de" Namespace="kube-system" Pod="coredns-668d6bf9bc-wspq5" WorkloadEndpoint="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:31.393880 containerd[2015]: time="2025-11-08T00:35:31.392272089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:31.393880 containerd[2015]: time="2025-11-08T00:35:31.392366527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:31.393880 containerd[2015]: time="2025-11-08T00:35:31.392397502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:31.393880 containerd[2015]: time="2025-11-08T00:35:31.393347306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:31.585714 systemd-networkd[1573]: calia53c6904941: Gained IPv6LL Nov 8 00:35:31.629375 systemd-networkd[1573]: cali384330a73b0: Link UP Nov 8 00:35:31.634380 systemd-networkd[1573]: cali384330a73b0: Gained carrier Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.215 [INFO][5357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0 calico-kube-controllers-6d6dbcdb77- calico-system 95719bb6-d015-4d14-97fc-c6a4da2f553e 1036 0 2025-11-08 00:35:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d6dbcdb77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-248 calico-kube-controllers-6d6dbcdb77-gwjrt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali384330a73b0 [] [] }} ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.220 [INFO][5357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.483 [INFO][5423] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" HandleID="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.483 [INFO][5423] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" HandleID="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bdf60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-248", "pod":"calico-kube-controllers-6d6dbcdb77-gwjrt", "timestamp":"2025-11-08 00:35:31.483297561 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.484 [INFO][5423] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.484 [INFO][5423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.484 [INFO][5423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.501 [INFO][5423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.524 [INFO][5423] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.537 [INFO][5423] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.542 [INFO][5423] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.547 [INFO][5423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.548 [INFO][5423] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.552 [INFO][5423] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7 Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.563 [INFO][5423] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.576 [INFO][5423] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.68/26] block=192.168.7.64/26 handle="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.577 [INFO][5423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.68/26] handle="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" host="ip-172-31-19-248" Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.577 [INFO][5423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:31.738560 containerd[2015]: 2025-11-08 00:35:31.577 [INFO][5423] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.68/26] IPv6=[] ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" HandleID="k8s-pod-network.42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.740759 containerd[2015]: 2025-11-08 00:35:31.595 [INFO][5357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0", GenerateName:"calico-kube-controllers-6d6dbcdb77-", Namespace:"calico-system", SelfLink:"", UID:"95719bb6-d015-4d14-97fc-c6a4da2f553e", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6dbcdb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"calico-kube-controllers-6d6dbcdb77-gwjrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali384330a73b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:31.740759 containerd[2015]: 2025-11-08 00:35:31.595 [INFO][5357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.68/32] ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.740759 containerd[2015]: 2025-11-08 00:35:31.595 [INFO][5357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali384330a73b0 ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.740759 containerd[2015]: 2025-11-08 00:35:31.649 [INFO][5357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.740759 containerd[2015]: 2025-11-08 
00:35:31.660 [INFO][5357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0", GenerateName:"calico-kube-controllers-6d6dbcdb77-", Namespace:"calico-system", SelfLink:"", UID:"95719bb6-d015-4d14-97fc-c6a4da2f553e", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6dbcdb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7", Pod:"calico-kube-controllers-6d6dbcdb77-gwjrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali384330a73b0", MAC:"02:d4:33:d3:ad:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:31.740759 containerd[2015]: 2025-11-08 00:35:31.696 [INFO][5357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7" Namespace="calico-system" Pod="calico-kube-controllers-6d6dbcdb77-gwjrt" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:31.784670 systemd-networkd[1573]: caliae1d16fcc30: Link UP Nov 8 00:35:31.786232 systemd-networkd[1573]: caliae1d16fcc30: Gained carrier Nov 8 00:35:31.806517 containerd[2015]: time="2025-11-08T00:35:31.806457900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wspq5,Uid:fa70c9d5-021d-43b0-b46a-b204a72a1b25,Namespace:kube-system,Attempt:1,} returns sandbox id \"8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de\"" Nov 8 00:35:31.811663 containerd[2015]: time="2025-11-08T00:35:31.811583714Z" level=info msg="CreateContainer within sandbox \"8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.370 [INFO][5377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0 calico-apiserver-76d787d64- calico-apiserver a8220629-4d1c-4d1f-829d-7ab1eb924825 1037 0 2025-11-08 00:34:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:76d787d64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-248 calico-apiserver-76d787d64-qzb5q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae1d16fcc30 [] [] }} ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.372 [INFO][5377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.507 [INFO][5461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" HandleID="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.508 [INFO][5461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" HandleID="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353bb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-248", "pod":"calico-apiserver-76d787d64-qzb5q", "timestamp":"2025-11-08 00:35:31.507549087 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.508 [INFO][5461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.577 [INFO][5461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.577 [INFO][5461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.604 [INFO][5461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.643 [INFO][5461] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.666 [INFO][5461] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.689 [INFO][5461] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.706 [INFO][5461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.707 [INFO][5461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.709 [INFO][5461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.720 [INFO][5461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.736 [INFO][5461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.69/26] block=192.168.7.64/26 handle="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.736 [INFO][5461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.69/26] handle="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" host="ip-172-31-19-248" Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.736 [INFO][5461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:31.856323 containerd[2015]: 2025-11-08 00:35:31.736 [INFO][5461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.69/26] IPv6=[] ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" HandleID="k8s-pod-network.4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.857265 containerd[2015]: 2025-11-08 00:35:31.761 [INFO][5377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8220629-4d1c-4d1f-829d-7ab1eb924825", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"calico-apiserver-76d787d64-qzb5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae1d16fcc30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:31.857265 containerd[2015]: 2025-11-08 00:35:31.767 [INFO][5377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.69/32] ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.857265 containerd[2015]: 2025-11-08 00:35:31.768 [INFO][5377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae1d16fcc30 ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.857265 containerd[2015]: 2025-11-08 00:35:31.791 [INFO][5377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.857265 containerd[2015]: 2025-11-08 00:35:31.792 [INFO][5377] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8220629-4d1c-4d1f-829d-7ab1eb924825", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f", Pod:"calico-apiserver-76d787d64-qzb5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae1d16fcc30", MAC:"ca:e9:c0:9e:49:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:31.857265 containerd[2015]: 2025-11-08 00:35:31.829 [INFO][5377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-qzb5q" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:31.865763 containerd[2015]: time="2025-11-08T00:35:31.865502956Z" level=info msg="CreateContainer within sandbox \"8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e0cbb9204884ca22ad421bed99456771e891579c74a9d6d58c7ef0e3bd68107\"" Nov 8 00:35:31.869413 containerd[2015]: time="2025-11-08T00:35:31.867317383Z" level=info msg="StartContainer for \"4e0cbb9204884ca22ad421bed99456771e891579c74a9d6d58c7ef0e3bd68107\"" Nov 8 00:35:31.879258 containerd[2015]: time="2025-11-08T00:35:31.878684686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:31.879258 containerd[2015]: time="2025-11-08T00:35:31.878774994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:31.879258 containerd[2015]: time="2025-11-08T00:35:31.878799574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:31.879258 containerd[2015]: time="2025-11-08T00:35:31.878984347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:31.939604 systemd-networkd[1573]: cali835e1439f02: Link UP Nov 8 00:35:31.942244 systemd-networkd[1573]: cali835e1439f02: Gained carrier Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.347 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0 goldmane-666569f655- calico-system 941e9e80-4862-4e00-88e0-89c895eac1a2 1035 0 2025-11-08 00:35:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-248 goldmane-666569f655-kblsp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali835e1439f02 [] [] }} ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.351 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.596 [INFO][5449] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" HandleID="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.601 [INFO][5449] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" HandleID="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a29b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-248", "pod":"goldmane-666569f655-kblsp", "timestamp":"2025-11-08 00:35:31.596474944 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.609 [INFO][5449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.739 [INFO][5449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.739 [INFO][5449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.768 [INFO][5449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.784 [INFO][5449] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.806 [INFO][5449] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.821 [INFO][5449] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.830 [INFO][5449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.830 [INFO][5449] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.832 [INFO][5449] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8 Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.842 [INFO][5449] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.862 [INFO][5449] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.70/26] block=192.168.7.64/26 handle="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.864 [INFO][5449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.70/26] handle="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" host="ip-172-31-19-248" Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.864 [INFO][5449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:32.021798 containerd[2015]: 2025-11-08 00:35:31.864 [INFO][5449] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.70/26] IPv6=[] ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" HandleID="k8s-pod-network.caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.023140 containerd[2015]: 2025-11-08 00:35:31.886 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"941e9e80-4862-4e00-88e0-89c895eac1a2", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"goldmane-666569f655-kblsp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali835e1439f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:32.023140 containerd[2015]: 2025-11-08 00:35:31.888 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.70/32] ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.023140 containerd[2015]: 2025-11-08 00:35:31.888 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali835e1439f02 ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.023140 containerd[2015]: 2025-11-08 00:35:31.946 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.023140 containerd[2015]: 2025-11-08 00:35:31.950 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" 
WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"941e9e80-4862-4e00-88e0-89c895eac1a2", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8", Pod:"goldmane-666569f655-kblsp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali835e1439f02", MAC:"9e:21:bf:6a:9c:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:32.023140 containerd[2015]: 2025-11-08 00:35:31.990 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8" Namespace="calico-system" Pod="goldmane-666569f655-kblsp" WorkloadEndpoint="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:32.035392 containerd[2015]: time="2025-11-08T00:35:32.035032432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:32.035392 containerd[2015]: time="2025-11-08T00:35:32.035135694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:32.035392 containerd[2015]: time="2025-11-08T00:35:32.035159922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.035392 containerd[2015]: time="2025-11-08T00:35:32.035325841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.162932 systemd-networkd[1573]: cali8785757fccd: Link UP Nov 8 00:35:32.181825 systemd-networkd[1573]: cali8785757fccd: Gained carrier Nov 8 00:35:32.250420 containerd[2015]: time="2025-11-08T00:35:32.249448035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d6dbcdb77-gwjrt,Uid:95719bb6-d015-4d14-97fc-c6a4da2f553e,Namespace:calico-system,Attempt:1,} returns sandbox id \"42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7\"" Nov 8 00:35:32.253900 containerd[2015]: time="2025-11-08T00:35:32.249204236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:32.253900 containerd[2015]: time="2025-11-08T00:35:32.249300404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:32.253900 containerd[2015]: time="2025-11-08T00:35:32.249323213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.253900 containerd[2015]: time="2025-11-08T00:35:32.249437834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.268964 containerd[2015]: time="2025-11-08T00:35:32.268772508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.419 [INFO][5392] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0 calico-apiserver-66479b5f68- calico-apiserver f8a56697-875b-4b53-b7cd-550689e931a7 1034 0 2025-11-08 00:34:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66479b5f68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-248 calico-apiserver-66479b5f68-gw447 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8785757fccd [] [] }} ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.419 [INFO][5392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.703 [INFO][5473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" HandleID="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.704 [INFO][5473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" HandleID="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-248", "pod":"calico-apiserver-66479b5f68-gw447", "timestamp":"2025-11-08 00:35:31.70322797 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.704 
[INFO][5473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.866 [INFO][5473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.866 [INFO][5473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.917 [INFO][5473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:31.963 [INFO][5473] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.010 [INFO][5473] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.017 [INFO][5473] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.049 [INFO][5473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.049 [INFO][5473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.070 [INFO][5473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.094 [INFO][5473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.117 [INFO][5473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.71/26] block=192.168.7.64/26 handle="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.117 [INFO][5473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.71/26] handle="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" host="ip-172-31-19-248" Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.117 [INFO][5473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:32.316961 containerd[2015]: 2025-11-08 00:35:32.117 [INFO][5473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.71/26] IPv6=[] ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" HandleID="k8s-pod-network.60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.319214 containerd[2015]: 2025-11-08 00:35:32.141 [INFO][5392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0", GenerateName:"calico-apiserver-66479b5f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8a56697-875b-4b53-b7cd-550689e931a7", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66479b5f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"calico-apiserver-66479b5f68-gw447", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8785757fccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:32.319214 containerd[2015]: 2025-11-08 00:35:32.142 [INFO][5392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.71/32] ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.319214 containerd[2015]: 2025-11-08 00:35:32.143 [INFO][5392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8785757fccd ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.319214 containerd[2015]: 2025-11-08 00:35:32.183 [INFO][5392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.319214 containerd[2015]: 2025-11-08 00:35:32.184 [INFO][5392] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0", GenerateName:"calico-apiserver-66479b5f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8a56697-875b-4b53-b7cd-550689e931a7", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66479b5f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d", Pod:"calico-apiserver-66479b5f68-gw447", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8785757fccd", MAC:"9a:45:70:05:66:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:32.319214 containerd[2015]: 2025-11-08 00:35:32.248 [INFO][5392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d" Namespace="calico-apiserver" Pod="calico-apiserver-66479b5f68-gw447" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:32.416925 systemd-networkd[1573]: calicf30e1d07d2: Link UP Nov 8 00:35:32.423912 systemd-networkd[1573]: calicf30e1d07d2: Gained carrier Nov 8 00:35:32.442279 containerd[2015]: time="2025-11-08T00:35:32.442143581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-qzb5q,Uid:a8220629-4d1c-4d1f-829d-7ab1eb924825,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f\"" Nov 8 00:35:32.495646 containerd[2015]: time="2025-11-08T00:35:32.495366278Z" level=info msg="StartContainer for \"4e0cbb9204884ca22ad421bed99456771e891579c74a9d6d58c7ef0e3bd68107\" returns successfully" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:31.463 [INFO][5394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0 csi-node-driver- calico-system deabdfcd-c211-4fd0-a621-ac2732629dc7 1038 0 2025-11-08 00:35:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} 
{k8s ip-172-31-19-248 csi-node-driver-xqwhf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicf30e1d07d2 [] [] }} ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:31.464 [INFO][5394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:31.732 [INFO][5485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" HandleID="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:31.733 [INFO][5485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" HandleID="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c54f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-248", "pod":"csi-node-driver-xqwhf", "timestamp":"2025-11-08 00:35:31.732747457 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:31.733 [INFO][5485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.117 [INFO][5485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.117 [INFO][5485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.137 [INFO][5485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.152 [INFO][5485] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.184 [INFO][5485] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.191 [INFO][5485] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.218 [INFO][5485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.218 [INFO][5485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.242 [INFO][5485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8 Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.274 [INFO][5485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.331 [INFO][5485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.72/26] block=192.168.7.64/26 handle="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.331 [INFO][5485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.72/26] handle="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" host="ip-172-31-19-248" Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.331 [INFO][5485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:35:32.505453 containerd[2015]: 2025-11-08 00:35:32.331 [INFO][5485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.72/26] IPv6=[] ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" HandleID="k8s-pod-network.a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.508199 containerd[2015]: 2025-11-08 00:35:32.384 [INFO][5394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deabdfcd-c211-4fd0-a621-ac2732629dc7", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"csi-node-driver-xqwhf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf30e1d07d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:32.508199 containerd[2015]: 2025-11-08 00:35:32.384 [INFO][5394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.72/32] ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.508199 containerd[2015]: 2025-11-08 00:35:32.384 [INFO][5394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf30e1d07d2 ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.508199 containerd[2015]: 2025-11-08 00:35:32.431 [INFO][5394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.508199 containerd[2015]: 2025-11-08 00:35:32.440 [INFO][5394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" 
Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deabdfcd-c211-4fd0-a621-ac2732629dc7", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8", Pod:"csi-node-driver-xqwhf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf30e1d07d2", MAC:"aa:7a:25:1c:30:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:32.508199 containerd[2015]: 2025-11-08 00:35:32.481 [INFO][5394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8" Namespace="calico-system" Pod="csi-node-driver-xqwhf" WorkloadEndpoint="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:32.557335 systemd[1]: Started sshd@8-172.31.19.248:22-139.178.89.65:38816.service - OpenSSH per-connection server daemon (139.178.89.65:38816). Nov 8 00:35:32.563758 containerd[2015]: time="2025-11-08T00:35:32.563186991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:32.574872 containerd[2015]: time="2025-11-08T00:35:32.563272159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:32.574872 containerd[2015]: time="2025-11-08T00:35:32.574386450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.574872 containerd[2015]: time="2025-11-08T00:35:32.574511038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.602672 containerd[2015]: time="2025-11-08T00:35:32.586242342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:32.602672 containerd[2015]: time="2025-11-08T00:35:32.602322108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:32.602672 containerd[2015]: time="2025-11-08T00:35:32.602359400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.602672 containerd[2015]: time="2025-11-08T00:35:32.602522337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:32.622320 containerd[2015]: time="2025-11-08T00:35:32.621605439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kblsp,Uid:941e9e80-4862-4e00-88e0-89c895eac1a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8\"" Nov 8 00:35:32.732539 containerd[2015]: time="2025-11-08T00:35:32.731913208Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:32.733735 containerd[2015]: time="2025-11-08T00:35:32.733683384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xqwhf,Uid:deabdfcd-c211-4fd0-a621-ac2732629dc7,Namespace:calico-system,Attempt:1,} returns sandbox id \"a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8\"" Nov 8 00:35:32.735088 containerd[2015]: time="2025-11-08T00:35:32.735035980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:35:32.735186 containerd[2015]: time="2025-11-08T00:35:32.735152018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:35:32.737403 kubelet[3251]: E1108 00:35:32.737318 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:35:32.737403 kubelet[3251]: E1108 00:35:32.737387 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:35:32.740120 containerd[2015]: time="2025-11-08T00:35:32.740001763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66479b5f68-gw447,Uid:f8a56697-875b-4b53-b7cd-550689e931a7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d\"" Nov 8 00:35:32.741056 containerd[2015]: time="2025-11-08T00:35:32.741030551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:32.749849 kubelet[3251]: E1108 00:35:32.748747 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nz66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6dbcdb77-gwjrt_calico-system(95719bb6-d015-4d14-97fc-c6a4da2f553e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:32.751258 kubelet[3251]: E1108 00:35:32.751090 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:35:32.817637 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 38816 ssh2: RSA 
SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:32.822104 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:32.830011 systemd-logind[1990]: New session 9 of user core. Nov 8 00:35:32.838000 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:35:32.878992 kubelet[3251]: E1108 00:35:32.876297 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:35:32.932343 systemd-networkd[1573]: cali384330a73b0: Gained IPv6LL Nov 8 00:35:32.960535 kubelet[3251]: I1108 00:35:32.943114 3251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wspq5" podStartSLOduration=45.943045259 podStartE2EDuration="45.943045259s" podCreationTimestamp="2025-11-08 00:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:35:32.936045322 +0000 UTC m=+50.716231271" watchObservedRunningTime="2025-11-08 00:35:32.943045259 +0000 UTC m=+50.723231214" Nov 8 00:35:33.055829 systemd-networkd[1573]: cali46c9f73af32: Gained IPv6LL Nov 8 00:35:33.091268 containerd[2015]: time="2025-11-08T00:35:33.091219728Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:33.093859 containerd[2015]: time="2025-11-08T00:35:33.093669345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:33.093859 containerd[2015]: time="2025-11-08T00:35:33.093727206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:33.095199 kubelet[3251]: E1108 00:35:33.095030 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:33.095199 kubelet[3251]: E1108 00:35:33.095095 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:33.095444 kubelet[3251]: E1108 00:35:33.095314 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8h6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-qzb5q_calico-apiserver(a8220629-4d1c-4d1f-829d-7ab1eb924825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:33.095788 containerd[2015]: time="2025-11-08T00:35:33.095760353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:35:33.097455 kubelet[3251]: E1108 00:35:33.096473 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:35:33.377766 systemd-networkd[1573]: cali835e1439f02: Gained IPv6LL Nov 8 00:35:33.387959 containerd[2015]: time="2025-11-08T00:35:33.387899933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:33.388960 sshd[5743]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:33.392015 containerd[2015]: time="2025-11-08T00:35:33.390383338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:35:33.392015 containerd[2015]: time="2025-11-08T00:35:33.390486951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:33.393181 systemd[1]: sshd@8-172.31.19.248:22-139.178.89.65:38816.service: Deactivated successfully. Nov 8 00:35:33.399737 kubelet[3251]: E1108 00:35:33.390663 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:35:33.399737 kubelet[3251]: E1108 00:35:33.399381 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:35:33.400608 kubelet[3251]: E1108 00:35:33.400172 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcgbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kblsp_calico-system(941e9e80-4862-4e00-88e0-89c895eac1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:33.402374 systemd-logind[1990]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:35:33.404120 kubelet[3251]: E1108 00:35:33.402989 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:35:33.403550 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:35:33.405130 containerd[2015]: time="2025-11-08T00:35:33.404578715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:35:33.406202 containerd[2015]: time="2025-11-08T00:35:33.406008791Z" level=info msg="StopPodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\"" Nov 8 00:35:33.407749 systemd-logind[1990]: Removed session 9. Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.460 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.460 [INFO][5834] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" iface="eth0" netns="/var/run/netns/cni-494f05a9-4b01-cc74-c5c8-219c92ef8d09" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.461 [INFO][5834] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" iface="eth0" netns="/var/run/netns/cni-494f05a9-4b01-cc74-c5c8-219c92ef8d09" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.461 [INFO][5834] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" iface="eth0" netns="/var/run/netns/cni-494f05a9-4b01-cc74-c5c8-219c92ef8d09" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.461 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.461 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.485 [INFO][5841] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.485 [INFO][5841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.486 [INFO][5841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.494 [WARNING][5841] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.494 [INFO][5841] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.496 [INFO][5841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:33.500425 containerd[2015]: 2025-11-08 00:35:33.498 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:33.503292 containerd[2015]: time="2025-11-08T00:35:33.500743929Z" level=info msg="TearDown network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" successfully" Nov 8 00:35:33.503292 containerd[2015]: time="2025-11-08T00:35:33.500770187Z" level=info msg="StopPodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" returns successfully" Nov 8 00:35:33.503292 containerd[2015]: time="2025-11-08T00:35:33.502885074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-blwxn,Uid:3efb6459-6c10-4cbc-8627-eed63b51acf1,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:35:33.505572 systemd[1]: run-netns-cni\x2d494f05a9\x2d4b01\x2dcc74\x2dc5c8\x2d219c92ef8d09.mount: Deactivated successfully. 
Nov 8 00:35:33.688454 systemd-networkd[1573]: cali288e7f93a5a: Link UP Nov 8 00:35:33.689386 systemd-networkd[1573]: cali288e7f93a5a: Gained carrier Nov 8 00:35:33.695842 systemd-networkd[1573]: caliae1d16fcc30: Gained IPv6LL Nov 8 00:35:33.698231 containerd[2015]: time="2025-11-08T00:35:33.698183611Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:33.700514 containerd[2015]: time="2025-11-08T00:35:33.700448519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:35:33.700816 containerd[2015]: time="2025-11-08T00:35:33.700502417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:35:33.701024 kubelet[3251]: E1108 00:35:33.700918 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:33.702405 kubelet[3251]: E1108 00:35:33.701067 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:33.703529 kubelet[3251]: E1108 00:35:33.702715 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:33.703822 containerd[2015]: time="2025-11-08T00:35:33.702921026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.573 [INFO][5848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0 calico-apiserver-76d787d64- calico-apiserver 3efb6459-6c10-4cbc-8627-eed63b51acf1 1101 0 2025-11-08 00:34:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d787d64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-248 calico-apiserver-76d787d64-blwxn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali288e7f93a5a [] [] }} ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.573 [INFO][5848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" 
WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.632 [INFO][5860] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" HandleID="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.633 [INFO][5860] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" HandleID="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-248", "pod":"calico-apiserver-76d787d64-blwxn", "timestamp":"2025-11-08 00:35:33.632498618 +0000 UTC"}, Hostname:"ip-172-31-19-248", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.633 [INFO][5860] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.633 [INFO][5860] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.633 [INFO][5860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-248' Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.642 [INFO][5860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.649 [INFO][5860] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.654 [INFO][5860] ipam/ipam.go 511: Trying affinity for 192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.658 [INFO][5860] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.663 [INFO][5860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.64/26 host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.663 [INFO][5860] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.64/26 handle="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.664 [INFO][5860] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0 Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.670 [INFO][5860] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.64/26 handle="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.680 [INFO][5860] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.7.73/26] block=192.168.7.64/26 handle="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.680 [INFO][5860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.73/26] handle="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" host="ip-172-31-19-248" Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.680 [INFO][5860] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:33.710749 containerd[2015]: 2025-11-08 00:35:33.680 [INFO][5860] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.73/26] IPv6=[] ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" HandleID="k8s-pod-network.75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.713156 containerd[2015]: 2025-11-08 00:35:33.684 [INFO][5848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efb6459-6c10-4cbc-8627-eed63b51acf1", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"", Pod:"calico-apiserver-76d787d64-blwxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali288e7f93a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:33.713156 containerd[2015]: 2025-11-08 00:35:33.684 [INFO][5848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.73/32] ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.713156 containerd[2015]: 2025-11-08 00:35:33.684 [INFO][5848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali288e7f93a5a ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn"
WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.713156 containerd[2015]: 2025-11-08 00:35:33.688 [INFO][5848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.713156 containerd[2015]: 2025-11-08 00:35:33.690 [INFO][5848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efb6459-6c10-4cbc-8627-eed63b51acf1", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0", Pod:"calico-apiserver-76d787d64-blwxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali288e7f93a5a", MAC:"02:ea:b9:38:0c:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:33.713156 containerd[2015]: 2025-11-08 00:35:33.707 [INFO][5848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0" Namespace="calico-apiserver" Pod="calico-apiserver-76d787d64-blwxn" WorkloadEndpoint="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:33.746119 containerd[2015]: time="2025-11-08T00:35:33.742813021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:35:33.746380 containerd[2015]: time="2025-11-08T00:35:33.746330593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:35:33.749972 containerd[2015]: time="2025-11-08T00:35:33.748593777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:33.749972 containerd[2015]: time="2025-11-08T00:35:33.748828206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:35:33.902055 containerd[2015]: time="2025-11-08T00:35:33.902007971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d787d64-blwxn,Uid:3efb6459-6c10-4cbc-8627-eed63b51acf1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0\"" Nov 8 00:35:33.918451 kubelet[3251]: E1108 00:35:33.918405 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:35:33.919730 kubelet[3251]: E1108 00:35:33.919541 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:35:33.921820 kubelet[3251]: E1108 00:35:33.921766 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:35:33.996568 containerd[2015]: time="2025-11-08T00:35:33.996506047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:33.999845 containerd[2015]: time="2025-11-08T00:35:33.999336426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:34.002669 containerd[2015]: time="2025-11-08T00:35:33.999459424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:34.002828 kubelet[3251]: E1108 00:35:34.000168 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:34.002828 kubelet[3251]: E1108 00:35:34.000218 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:34.002828 kubelet[3251]: E1108 00:35:34.000481 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lvqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66479b5f68-gw447_calico-apiserver(f8a56697-875b-4b53-b7cd-550689e931a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:34.003336 kubelet[3251]: E1108 00:35:34.003303 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:35:34.004273 containerd[2015]: time="2025-11-08T00:35:34.003699423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:35:34.015900 systemd-networkd[1573]: cali8785757fccd: Gained IPv6LL Nov 8 00:35:34.019933 systemd-networkd[1573]: calicf30e1d07d2: Gained IPv6LL Nov 8 00:35:34.302206 containerd[2015]: time="2025-11-08T00:35:34.302005768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:34.304087 containerd[2015]: time="2025-11-08T00:35:34.304035231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:35:34.304190 containerd[2015]: time="2025-11-08T00:35:34.304123704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:35:34.304359 kubelet[3251]: E1108 00:35:34.304311 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:34.304417 kubelet[3251]: E1108 00:35:34.304370 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:34.304611 kubelet[3251]: E1108 00:35:34.304559 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:34.305040 containerd[2015]: time="2025-11-08T00:35:34.305017151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:34.306454 kubelet[3251]: E1108 00:35:34.306376 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:34.583267 containerd[2015]: time="2025-11-08T00:35:34.583149458Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:34.585459 containerd[2015]: time="2025-11-08T00:35:34.585382655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:34.585593 containerd[2015]: time="2025-11-08T00:35:34.585468157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:34.585844 kubelet[3251]: E1108 00:35:34.585772 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:34.585844 kubelet[3251]: E1108 00:35:34.585826 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:34.586006 kubelet[3251]: E1108 00:35:34.585958 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8v6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-blwxn_calico-apiserver(3efb6459-6c10-4cbc-8627-eed63b51acf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:34.587454 kubelet[3251]: E1108 00:35:34.587379 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:35:34.924352 kubelet[3251]: E1108 00:35:34.922770 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:35:34.931754 kubelet[3251]: E1108 00:35:34.927428 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:35:34.931754 kubelet[3251]: E1108 00:35:34.928424 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:35.488311 systemd-networkd[1573]: cali288e7f93a5a: Gained IPv6LL Nov 8 00:35:38.306289 ntpd[1972]: Listen normally on 6 vxlan.calico 192.168.7.64:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 6 vxlan.calico 192.168.7.64:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 7 cali1024b45d84a [fe80::ecee:eeff:feee:eeee%4]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 8 vxlan.calico 
[fe80::6411:1aff:fe7e:60ba%5]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 9 calia53c6904941 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 10 cali46c9f73af32 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 11 cali384330a73b0 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 12 caliae1d16fcc30 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 13 cali835e1439f02 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 14 cali8785757fccd [fe80::ecee:eeff:feee:eeee%13]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 15 calicf30e1d07d2 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 8 00:35:38.307661 ntpd[1972]: 8 Nov 00:35:38 ntpd[1972]: Listen normally on 16 cali288e7f93a5a [fe80::ecee:eeff:feee:eeee%15]:123 Nov 8 00:35:38.306375 ntpd[1972]: Listen normally on 7 cali1024b45d84a [fe80::ecee:eeff:feee:eeee%4]:123 Nov 8 00:35:38.306419 ntpd[1972]: Listen normally on 8 vxlan.calico [fe80::6411:1aff:fe7e:60ba%5]:123 Nov 8 00:35:38.306448 ntpd[1972]: Listen normally on 9 calia53c6904941 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:35:38.306476 ntpd[1972]: Listen normally on 10 cali46c9f73af32 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:35:38.306506 ntpd[1972]: Listen normally on 11 cali384330a73b0 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:35:38.306533 ntpd[1972]: Listen normally on 12 caliae1d16fcc30 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:35:38.306561 ntpd[1972]: Listen normally on 13 cali835e1439f02 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 8 00:35:38.306587 ntpd[1972]: Listen normally on 14 cali8785757fccd [fe80::ecee:eeff:feee:eeee%13]:123 Nov 8 00:35:38.306613 ntpd[1972]: Listen normally on 15 calicf30e1d07d2 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 8 00:35:38.306664 ntpd[1972]: Listen normally on 16 cali288e7f93a5a [fe80::ecee:eeff:feee:eeee%15]:123 Nov 8 00:35:38.417905 systemd[1]: Started sshd@9-172.31.19.248:22-139.178.89.65:40666.service - OpenSSH per-connection server daemon (139.178.89.65:40666). Nov 8 00:35:38.616719 sshd[5931]: Accepted publickey for core from 139.178.89.65 port 40666 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:38.624963 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:38.648740 systemd-logind[1990]: New session 10 of user core. Nov 8 00:35:38.651929 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:35:38.991777 sshd[5931]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:39.002352 systemd[1]: sshd@9-172.31.19.248:22-139.178.89.65:40666.service: Deactivated successfully. Nov 8 00:35:39.009415 systemd-logind[1990]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:35:39.009826 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:35:39.017194 systemd-logind[1990]: Removed session 10. Nov 8 00:35:39.023994 systemd[1]: Started sshd@10-172.31.19.248:22-139.178.89.65:40678.service - OpenSSH per-connection server daemon (139.178.89.65:40678). 
Nov 8 00:35:39.183582 sshd[5948]: Accepted publickey for core from 139.178.89.65 port 40678 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:39.184248 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:39.189140 systemd-logind[1990]: New session 11 of user core. Nov 8 00:35:39.192952 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:35:39.480777 sshd[5948]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:39.490137 systemd-logind[1990]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:35:39.490544 systemd[1]: sshd@10-172.31.19.248:22-139.178.89.65:40678.service: Deactivated successfully. Nov 8 00:35:39.519210 systemd[1]: Started sshd@11-172.31.19.248:22-139.178.89.65:40684.service - OpenSSH per-connection server daemon (139.178.89.65:40684). Nov 8 00:35:39.519641 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:35:39.522284 systemd-logind[1990]: Removed session 11. Nov 8 00:35:39.685995 sshd[5960]: Accepted publickey for core from 139.178.89.65 port 40684 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:39.687571 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:39.692609 systemd-logind[1990]: New session 12 of user core. Nov 8 00:35:39.695968 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:35:39.982261 sshd[5960]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:39.986537 systemd[1]: sshd@11-172.31.19.248:22-139.178.89.65:40684.service: Deactivated successfully. Nov 8 00:35:39.993375 systemd-logind[1990]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:35:39.995531 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:35:39.997839 systemd-logind[1990]: Removed session 12. Nov 8 00:35:42.440442 containerd[2015]: time="2025-11-08T00:35:42.440110953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:35:42.467544 containerd[2015]: time="2025-11-08T00:35:42.467493079Z" level=info msg="StopPodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\"" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.523 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efb6459-6c10-4cbc-8627-eed63b51acf1", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0", Pod:"calico-apiserver-76d787d64-blwxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali288e7f93a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.524 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.524 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" iface="eth0" netns="" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.524 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.524 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.550 [INFO][5997] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.550 [INFO][5997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.550 [INFO][5997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.557 [WARNING][5997] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.557 [INFO][5997] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.560 [INFO][5997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:42.564589 containerd[2015]: 2025-11-08 00:35:42.562 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.565118 containerd[2015]: time="2025-11-08T00:35:42.564657876Z" level=info msg="TearDown network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" successfully" Nov 8 00:35:42.565118 containerd[2015]: time="2025-11-08T00:35:42.564689913Z" level=info msg="StopPodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" returns successfully" Nov 8 00:35:42.573976 containerd[2015]: time="2025-11-08T00:35:42.573920895Z" level=info msg="RemovePodSandbox for \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\"" Nov 8 00:35:42.573976 containerd[2015]: time="2025-11-08T00:35:42.573964163Z" level=info msg="Forcibly stopping sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\"" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.611 [WARNING][6011] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"3efb6459-6c10-4cbc-8627-eed63b51acf1", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"75914e329a3763f0ad317dd7cafa2e624353f5ed6211f3afc563f590b561cbf0", Pod:"calico-apiserver-76d787d64-blwxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali288e7f93a5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.611 [INFO][6011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.611 [INFO][6011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" iface="eth0" netns="" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.611 [INFO][6011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.611 [INFO][6011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.638 [INFO][6019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.638 [INFO][6019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.638 [INFO][6019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.644 [WARNING][6019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.644 [INFO][6019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" HandleID="k8s-pod-network.8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--blwxn-eth0" Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.646 [INFO][6019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:42.652061 containerd[2015]: 2025-11-08 00:35:42.648 [INFO][6011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9" Nov 8 00:35:42.652061 containerd[2015]: time="2025-11-08T00:35:42.650525373Z" level=info msg="TearDown network for sandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" successfully" Nov 8 00:35:42.656243 containerd[2015]: time="2025-11-08T00:35:42.656155650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:42.656382 containerd[2015]: time="2025-11-08T00:35:42.656255271Z" level=info msg="RemovePodSandbox \"8cf356acb05636188100de7b3846862c1682fc7e80dd451d7345b675e9e2c3e9\" returns successfully" Nov 8 00:35:42.656826 containerd[2015]: time="2025-11-08T00:35:42.656793196Z" level=info msg="StopPodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\"" Nov 8 00:35:42.724257 containerd[2015]: time="2025-11-08T00:35:42.723569183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:42.727110 containerd[2015]: time="2025-11-08T00:35:42.727050614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:35:42.727245 containerd[2015]: time="2025-11-08T00:35:42.727140151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:35:42.727407 kubelet[3251]: E1108 00:35:42.727366 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:35:42.728871 kubelet[3251]: E1108 00:35:42.727420 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:35:42.728871 kubelet[3251]: E1108 00:35:42.727555 3251 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4404d727bb9a4ffdaf41a02b37a33d06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:42.732445 containerd[2015]: time="2025-11-08T00:35:42.732354164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.691 [WARNING][6033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"77d5a3a3-e22d-4eb1-b462-92484d58d55c", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54", Pod:"coredns-668d6bf9bc-tmmhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53c6904941", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.691 [INFO][6033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.691 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" iface="eth0" netns="" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.691 [INFO][6033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.691 [INFO][6033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.716 [INFO][6040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.716 [INFO][6040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.716 [INFO][6040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.724 [WARNING][6040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.724 [INFO][6040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.728 [INFO][6040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:42.736942 containerd[2015]: 2025-11-08 00:35:42.733 [INFO][6033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.736942 containerd[2015]: time="2025-11-08T00:35:42.736750614Z" level=info msg="TearDown network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" successfully" Nov 8 00:35:42.736942 containerd[2015]: time="2025-11-08T00:35:42.736778741Z" level=info msg="StopPodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" returns successfully" Nov 8 00:35:42.740385 containerd[2015]: time="2025-11-08T00:35:42.737324826Z" level=info msg="RemovePodSandbox for \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\"" Nov 8 00:35:42.740385 containerd[2015]: time="2025-11-08T00:35:42.737360152Z" level=info msg="Forcibly stopping sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\"" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.780 [WARNING][6055] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"77d5a3a3-e22d-4eb1-b462-92484d58d55c", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"e48c816abedc3b9a45b6904d2f91452068770db9a81e63dbcd4bb884a60c5e54", Pod:"coredns-668d6bf9bc-tmmhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53c6904941", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.780 [INFO][6055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.780 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" iface="eth0" netns="" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.780 [INFO][6055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.780 [INFO][6055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.807 [INFO][6062] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.807 [INFO][6062] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.807 [INFO][6062] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.814 [WARNING][6062] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.814 [INFO][6062] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" HandleID="k8s-pod-network.c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--tmmhp-eth0" Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.816 [INFO][6062] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:42.820056 containerd[2015]: 2025-11-08 00:35:42.818 [INFO][6055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8" Nov 8 00:35:42.820775 containerd[2015]: time="2025-11-08T00:35:42.820101863Z" level=info msg="TearDown network for sandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" successfully" Nov 8 00:35:42.829292 containerd[2015]: time="2025-11-08T00:35:42.829126942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:42.829292 containerd[2015]: time="2025-11-08T00:35:42.829197509Z" level=info msg="RemovePodSandbox \"c9e85e45cadf0fc05b47b4d2aa91af6d6897bd4d0158c5dadccb9a3896d951b8\" returns successfully" Nov 8 00:35:42.829765 containerd[2015]: time="2025-11-08T00:35:42.829716767Z" level=info msg="StopPodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\"" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.866 [WARNING][6076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deabdfcd-c211-4fd0-a621-ac2732629dc7", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8", Pod:"csi-node-driver-xqwhf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf30e1d07d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.867 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.867 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" iface="eth0" netns="" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.867 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.867 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.893 [INFO][6083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.893 [INFO][6083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.894 [INFO][6083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.903 [WARNING][6083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.903 [INFO][6083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.905 [INFO][6083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:42.909391 containerd[2015]: 2025-11-08 00:35:42.907 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:42.910452 containerd[2015]: time="2025-11-08T00:35:42.909541467Z" level=info msg="TearDown network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" successfully" Nov 8 00:35:42.910452 containerd[2015]: time="2025-11-08T00:35:42.909573003Z" level=info msg="StopPodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" returns successfully" Nov 8 00:35:42.910452 containerd[2015]: time="2025-11-08T00:35:42.910116084Z" level=info msg="RemovePodSandbox for \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\"" Nov 8 00:35:42.910452 containerd[2015]: time="2025-11-08T00:35:42.910150900Z" level=info msg="Forcibly stopping sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\"" Nov 8 00:35:43.028178 containerd[2015]: time="2025-11-08T00:35:43.028131833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:43.031693 containerd[2015]: time="2025-11-08T00:35:43.031495892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:35:43.031693 containerd[2015]: time="2025-11-08T00:35:43.031524354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:35:43.032937 kubelet[3251]: E1108 00:35:43.032273 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:35:43.032937 kubelet[3251]: E1108 00:35:43.032337 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:35:43.032937 kubelet[3251]: E1108 00:35:43.032477 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:43.034685 kubelet[3251]: E1108 00:35:43.034119 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.007 [WARNING][6098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deabdfcd-c211-4fd0-a621-ac2732629dc7", ResourceVersion:"1157", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"a3ffb9f33f64e2d83f03ca5c0ce0e8ae3f1051cf24b4b5a1af7d78ad36c78ef8", Pod:"csi-node-driver-xqwhf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf30e1d07d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.007 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.007 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" iface="eth0" netns="" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.007 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.007 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.035 [INFO][6109] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.036 [INFO][6109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.036 [INFO][6109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.044 [WARNING][6109] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.045 [INFO][6109] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" HandleID="k8s-pod-network.b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Workload="ip--172--31--19--248-k8s-csi--node--driver--xqwhf-eth0" Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.047 [INFO][6109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.053899 containerd[2015]: 2025-11-08 00:35:43.050 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9" Nov 8 00:35:43.056229 containerd[2015]: time="2025-11-08T00:35:43.056174447Z" level=info msg="TearDown network for sandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" successfully" Nov 8 00:35:43.062533 containerd[2015]: time="2025-11-08T00:35:43.062315141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:43.062533 containerd[2015]: time="2025-11-08T00:35:43.062383428Z" level=info msg="RemovePodSandbox \"b3bf29cb38cc0767500e50d6c50f6eb05025f44d7e7904e0daf1745cae7dc5a9\" returns successfully" Nov 8 00:35:43.063746 containerd[2015]: time="2025-11-08T00:35:43.063585621Z" level=info msg="StopPodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\"" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.112 [WARNING][6123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0", GenerateName:"calico-apiserver-66479b5f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8a56697-875b-4b53-b7cd-550689e931a7", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66479b5f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d", Pod:"calico-apiserver-66479b5f68-gw447", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8785757fccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.113 [INFO][6123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.113 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" iface="eth0" netns="" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.113 [INFO][6123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.113 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.136 [INFO][6130] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.136 [INFO][6130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.136 [INFO][6130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.142 [WARNING][6130] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.142 [INFO][6130] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.144 [INFO][6130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.148613 containerd[2015]: 2025-11-08 00:35:43.146 [INFO][6123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.148613 containerd[2015]: time="2025-11-08T00:35:43.148487697Z" level=info msg="TearDown network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" successfully" Nov 8 00:35:43.148613 containerd[2015]: time="2025-11-08T00:35:43.148512644Z" level=info msg="StopPodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" returns successfully" Nov 8 00:35:43.149350 containerd[2015]: time="2025-11-08T00:35:43.149072071Z" level=info msg="RemovePodSandbox for \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\"" Nov 8 00:35:43.149350 containerd[2015]: time="2025-11-08T00:35:43.149108464Z" level=info msg="Forcibly stopping sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\"" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.187 [WARNING][6144] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0", GenerateName:"calico-apiserver-66479b5f68-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8a56697-875b-4b53-b7cd-550689e931a7", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66479b5f68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"60a90f9ca595b7fdf443f7cb691f839f3b06008d6e16905ce33bcd8e985b8f3d", Pod:"calico-apiserver-66479b5f68-gw447", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8785757fccd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.188 [INFO][6144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.188 [INFO][6144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" iface="eth0" netns="" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.188 [INFO][6144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.188 [INFO][6144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.224 [INFO][6151] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.224 [INFO][6151] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.224 [INFO][6151] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.234 [WARNING][6151] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.234 [INFO][6151] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" HandleID="k8s-pod-network.39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Workload="ip--172--31--19--248-k8s-calico--apiserver--66479b5f68--gw447-eth0" Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.236 [INFO][6151] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.242135 containerd[2015]: 2025-11-08 00:35:43.239 [INFO][6144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75" Nov 8 00:35:43.242970 containerd[2015]: time="2025-11-08T00:35:43.242170224Z" level=info msg="TearDown network for sandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" successfully" Nov 8 00:35:43.249134 containerd[2015]: time="2025-11-08T00:35:43.249076680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:43.249287 containerd[2015]: time="2025-11-08T00:35:43.249186164Z" level=info msg="RemovePodSandbox \"39ae1ae15251c3fe12bea0dd162da0d4f1c241fb1987c5f7c05978224b611d75\" returns successfully" Nov 8 00:35:43.250680 containerd[2015]: time="2025-11-08T00:35:43.249765422Z" level=info msg="StopPodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\"" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.300 [WARNING][6165] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa70c9d5-021d-43b0-b46a-b204a72a1b25", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de", Pod:"coredns-668d6bf9bc-wspq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c9f73af32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.300 [INFO][6165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.301 [INFO][6165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" iface="eth0" netns="" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.301 [INFO][6165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.301 [INFO][6165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.336 [INFO][6175] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.336 [INFO][6175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.336 [INFO][6175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.344 [WARNING][6175] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.344 [INFO][6175] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.347 [INFO][6175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.354096 containerd[2015]: 2025-11-08 00:35:43.350 [INFO][6165] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.354096 containerd[2015]: time="2025-11-08T00:35:43.353653688Z" level=info msg="TearDown network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" successfully" Nov 8 00:35:43.354096 containerd[2015]: time="2025-11-08T00:35:43.353679799Z" level=info msg="StopPodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" returns successfully" Nov 8 00:35:43.360075 containerd[2015]: time="2025-11-08T00:35:43.357089360Z" level=info msg="RemovePodSandbox for \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\"" Nov 8 00:35:43.360075 containerd[2015]: time="2025-11-08T00:35:43.357126948Z" level=info msg="Forcibly stopping sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\"" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.422 [WARNING][6189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fa70c9d5-021d-43b0-b46a-b204a72a1b25", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"8c7555a362e3520362981144605e5051e32d343d4b1cf9d85cbccf923d2c64de", Pod:"coredns-668d6bf9bc-wspq5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c9f73af32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.422 [INFO][6189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.422 [INFO][6189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" iface="eth0" netns="" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.422 [INFO][6189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.422 [INFO][6189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.461 [INFO][6196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.461 [INFO][6196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.461 [INFO][6196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.469 [WARNING][6196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.469 [INFO][6196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" HandleID="k8s-pod-network.c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Workload="ip--172--31--19--248-k8s-coredns--668d6bf9bc--wspq5-eth0" Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.471 [INFO][6196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.475915 containerd[2015]: 2025-11-08 00:35:43.473 [INFO][6189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce" Nov 8 00:35:43.476995 containerd[2015]: time="2025-11-08T00:35:43.475963195Z" level=info msg="TearDown network for sandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" successfully" Nov 8 00:35:43.482094 containerd[2015]: time="2025-11-08T00:35:43.482033945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:43.482225 containerd[2015]: time="2025-11-08T00:35:43.482110777Z" level=info msg="RemovePodSandbox \"c1973b71ed468acbb50061857866920bb483e2b7fc79311632f6d5ae54ba24ce\" returns successfully" Nov 8 00:35:43.482712 containerd[2015]: time="2025-11-08T00:35:43.482685617Z" level=info msg="StopPodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\"" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.529 [WARNING][6211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"941e9e80-4862-4e00-88e0-89c895eac1a2", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8", Pod:"goldmane-666569f655-kblsp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali835e1439f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.529 [INFO][6211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.529 [INFO][6211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" iface="eth0" netns="" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.529 [INFO][6211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.529 [INFO][6211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.561 [INFO][6219] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.561 [INFO][6219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.561 [INFO][6219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.568 [WARNING][6219] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.568 [INFO][6219] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.569 [INFO][6219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.574220 containerd[2015]: 2025-11-08 00:35:43.572 [INFO][6211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.575100 containerd[2015]: time="2025-11-08T00:35:43.574305909Z" level=info msg="TearDown network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" successfully" Nov 8 00:35:43.575100 containerd[2015]: time="2025-11-08T00:35:43.574338307Z" level=info msg="StopPodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" returns successfully" Nov 8 00:35:43.575311 containerd[2015]: time="2025-11-08T00:35:43.575235457Z" level=info msg="RemovePodSandbox for \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\"" Nov 8 00:35:43.575376 containerd[2015]: time="2025-11-08T00:35:43.575274453Z" level=info msg="Forcibly stopping sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\"" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.615 [WARNING][6233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"941e9e80-4862-4e00-88e0-89c895eac1a2", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"caa2ccaffc381a7f3b6441a03daff087745d6fdefc07f0d25bc74c95d8a082c8", Pod:"goldmane-666569f655-kblsp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali835e1439f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.616 [INFO][6233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.616 [INFO][6233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" iface="eth0" netns="" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.616 [INFO][6233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.616 [INFO][6233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.643 [INFO][6240] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.643 [INFO][6240] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.644 [INFO][6240] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.651 [WARNING][6240] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.651 [INFO][6240] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" HandleID="k8s-pod-network.575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Workload="ip--172--31--19--248-k8s-goldmane--666569f655--kblsp-eth0" Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.653 [INFO][6240] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.657713 containerd[2015]: 2025-11-08 00:35:43.655 [INFO][6233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda" Nov 8 00:35:43.658408 containerd[2015]: time="2025-11-08T00:35:43.657717189Z" level=info msg="TearDown network for sandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" successfully" Nov 8 00:35:43.665253 containerd[2015]: time="2025-11-08T00:35:43.665200852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:43.665253 containerd[2015]: time="2025-11-08T00:35:43.665271285Z" level=info msg="RemovePodSandbox \"575d2172d2504d099e0a0bec65d2266dcd5cd3d10a3687fdccb5c03f8dd12cda\" returns successfully" Nov 8 00:35:43.665829 containerd[2015]: time="2025-11-08T00:35:43.665802046Z" level=info msg="StopPodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\"" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.704 [WARNING][6254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8220629-4d1c-4d1f-829d-7ab1eb924825", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f", Pod:"calico-apiserver-76d787d64-qzb5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae1d16fcc30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.706 [INFO][6254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.706 [INFO][6254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" iface="eth0" netns="" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.706 [INFO][6254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.706 [INFO][6254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.732 [INFO][6261] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.732 [INFO][6261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.733 [INFO][6261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.739 [WARNING][6261] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.739 [INFO][6261] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.740 [INFO][6261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.744897 containerd[2015]: 2025-11-08 00:35:43.742 [INFO][6254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.745367 containerd[2015]: time="2025-11-08T00:35:43.744939933Z" level=info msg="TearDown network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" successfully" Nov 8 00:35:43.745367 containerd[2015]: time="2025-11-08T00:35:43.744961461Z" level=info msg="StopPodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" returns successfully" Nov 8 00:35:43.745458 containerd[2015]: time="2025-11-08T00:35:43.745434718Z" level=info msg="RemovePodSandbox for \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\"" Nov 8 00:35:43.745494 containerd[2015]: time="2025-11-08T00:35:43.745462011Z" level=info msg="Forcibly stopping sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\"" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.786 [WARNING][6275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0", GenerateName:"calico-apiserver-76d787d64-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8220629-4d1c-4d1f-829d-7ab1eb924825", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 34, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d787d64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"4acd159ea7cd59aff6bd3f776d26fcecc91cd1fb9559aae65a76e8ea8f2f7a5f", Pod:"calico-apiserver-76d787d64-qzb5q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae1d16fcc30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.787 [INFO][6275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.787 [INFO][6275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" iface="eth0" netns="" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.787 [INFO][6275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.787 [INFO][6275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.811 [INFO][6283] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.811 [INFO][6283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.811 [INFO][6283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.818 [WARNING][6283] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.818 [INFO][6283] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" HandleID="k8s-pod-network.20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Workload="ip--172--31--19--248-k8s-calico--apiserver--76d787d64--qzb5q-eth0" Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.820 [INFO][6283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.824100 containerd[2015]: 2025-11-08 00:35:43.822 [INFO][6275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415" Nov 8 00:35:43.824100 containerd[2015]: time="2025-11-08T00:35:43.824085253Z" level=info msg="TearDown network for sandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" successfully" Nov 8 00:35:43.830124 containerd[2015]: time="2025-11-08T00:35:43.830063303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:43.830124 containerd[2015]: time="2025-11-08T00:35:43.830130832Z" level=info msg="RemovePodSandbox \"20544c99b5897876d6f333d3d1aeaa192c7f3c5f0e59f88c891a8a8300cc3415\" returns successfully" Nov 8 00:35:43.830668 containerd[2015]: time="2025-11-08T00:35:43.830617334Z" level=info msg="StopPodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\"" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.867 [WARNING][6297] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.867 [INFO][6297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.867 [INFO][6297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" iface="eth0" netns="" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.867 [INFO][6297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.867 [INFO][6297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.901 [INFO][6304] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.901 [INFO][6304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.901 [INFO][6304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.908 [WARNING][6304] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.908 [INFO][6304] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.909 [INFO][6304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:43.915416 containerd[2015]: 2025-11-08 00:35:43.911 [INFO][6297] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:43.915416 containerd[2015]: time="2025-11-08T00:35:43.913771150Z" level=info msg="TearDown network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" successfully" Nov 8 00:35:43.915416 containerd[2015]: time="2025-11-08T00:35:43.913794332Z" level=info msg="StopPodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" returns successfully" Nov 8 00:35:43.915416 containerd[2015]: time="2025-11-08T00:35:43.914491400Z" level=info msg="RemovePodSandbox for \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\"" Nov 8 00:35:43.915416 containerd[2015]: time="2025-11-08T00:35:43.914527315Z" level=info msg="Forcibly stopping sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\"" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.954 [WARNING][6321] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" WorkloadEndpoint="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.954 [INFO][6321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.954 [INFO][6321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" iface="eth0" netns="" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.954 [INFO][6321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.954 [INFO][6321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.996 [INFO][6328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.997 [INFO][6328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:43.998 [INFO][6328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:44.008 [WARNING][6328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:44.008 [INFO][6328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" HandleID="k8s-pod-network.7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Workload="ip--172--31--19--248-k8s-whisker--56499b9b97--nd2wd-eth0" Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:44.014 [INFO][6328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:44.022415 containerd[2015]: 2025-11-08 00:35:44.019 [INFO][6321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd" Nov 8 00:35:44.023412 containerd[2015]: time="2025-11-08T00:35:44.022471247Z" level=info msg="TearDown network for sandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" successfully" Nov 8 00:35:44.028491 containerd[2015]: time="2025-11-08T00:35:44.028337804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:44.028491 containerd[2015]: time="2025-11-08T00:35:44.028400620Z" level=info msg="RemovePodSandbox \"7fc488f8853526a4fcaeb5e9d26428c2c2ceb3d0df01502f3f1dca53ced7befd\" returns successfully" Nov 8 00:35:44.029395 containerd[2015]: time="2025-11-08T00:35:44.028890062Z" level=info msg="StopPodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\"" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.067 [WARNING][6342] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0", GenerateName:"calico-kube-controllers-6d6dbcdb77-", Namespace:"calico-system", SelfLink:"", UID:"95719bb6-d015-4d14-97fc-c6a4da2f553e", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6dbcdb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7", Pod:"calico-kube-controllers-6d6dbcdb77-gwjrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali384330a73b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.067 [INFO][6342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.068 [INFO][6342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" iface="eth0" netns="" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.068 [INFO][6342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.068 [INFO][6342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.092 [INFO][6349] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.092 [INFO][6349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.092 [INFO][6349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.099 [WARNING][6349] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.099 [INFO][6349] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.101 [INFO][6349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:44.105841 containerd[2015]: 2025-11-08 00:35:44.103 [INFO][6342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.107037 containerd[2015]: time="2025-11-08T00:35:44.105865770Z" level=info msg="TearDown network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" successfully" Nov 8 00:35:44.107037 containerd[2015]: time="2025-11-08T00:35:44.105892553Z" level=info msg="StopPodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" returns successfully" Nov 8 00:35:44.107037 containerd[2015]: time="2025-11-08T00:35:44.106468622Z" level=info msg="RemovePodSandbox for \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\"" Nov 8 00:35:44.107037 containerd[2015]: time="2025-11-08T00:35:44.106501546Z" level=info msg="Forcibly stopping sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\"" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.144 [WARNING][6364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0", GenerateName:"calico-kube-controllers-6d6dbcdb77-", Namespace:"calico-system", SelfLink:"", UID:"95719bb6-d015-4d14-97fc-c6a4da2f553e", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 35, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d6dbcdb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-248", ContainerID:"42ae027b5ebe0fcf4d5de9e8ee8f644540ea11f1ead83ac41505b216768781d7", Pod:"calico-kube-controllers-6d6dbcdb77-gwjrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali384330a73b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.144 [INFO][6364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.144 [INFO][6364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" iface="eth0" netns="" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.144 [INFO][6364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.144 [INFO][6364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.168 [INFO][6371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.168 [INFO][6371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.168 [INFO][6371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.175 [WARNING][6371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.175 [INFO][6371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" HandleID="k8s-pod-network.daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Workload="ip--172--31--19--248-k8s-calico--kube--controllers--6d6dbcdb77--gwjrt-eth0" Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.177 [INFO][6371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:35:44.181353 containerd[2015]: 2025-11-08 00:35:44.179 [INFO][6364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e" Nov 8 00:35:44.181353 containerd[2015]: time="2025-11-08T00:35:44.181303653Z" level=info msg="TearDown network for sandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" successfully" Nov 8 00:35:44.189539 containerd[2015]: time="2025-11-08T00:35:44.189454695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:35:44.189539 containerd[2015]: time="2025-11-08T00:35:44.189531191Z" level=info msg="RemovePodSandbox \"daab287c15d00e97d16a5137a2027b5908e889eebafceadf13160fe19d4f3c6e\" returns successfully" Nov 8 00:35:45.016048 systemd[1]: Started sshd@12-172.31.19.248:22-139.178.89.65:40700.service - OpenSSH per-connection server daemon (139.178.89.65:40700). Nov 8 00:35:45.214686 sshd[6377]: Accepted publickey for core from 139.178.89.65 port 40700 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:45.216990 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:45.226715 systemd-logind[1990]: New session 13 of user core. Nov 8 00:35:45.233115 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:35:45.399554 containerd[2015]: time="2025-11-08T00:35:45.398235898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:45.525190 sshd[6377]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:45.528678 systemd[1]: sshd@12-172.31.19.248:22-139.178.89.65:40700.service: Deactivated successfully. Nov 8 00:35:45.533035 systemd-logind[1990]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:35:45.533997 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:35:45.535544 systemd-logind[1990]: Removed session 13. 
Nov 8 00:35:45.682060 containerd[2015]: time="2025-11-08T00:35:45.681928424Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:45.684052 containerd[2015]: time="2025-11-08T00:35:45.683934360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:45.684052 containerd[2015]: time="2025-11-08T00:35:45.684000439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:45.684220 kubelet[3251]: E1108 00:35:45.684141 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:45.684220 kubelet[3251]: E1108 00:35:45.684190 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:45.684600 kubelet[3251]: E1108 00:35:45.684397 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lvqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66479b5f68-gw447_calico-apiserver(f8a56697-875b-4b53-b7cd-550689e931a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:45.685920 kubelet[3251]: E1108 00:35:45.685484 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:35:45.686107 containerd[2015]: time="2025-11-08T00:35:45.685544680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:45.973273 containerd[2015]: time="2025-11-08T00:35:45.973139987Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:45.975350 containerd[2015]: time="2025-11-08T00:35:45.975296349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:45.975547 containerd[2015]: time="2025-11-08T00:35:45.975325215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:45.975584 kubelet[3251]: E1108 00:35:45.975515 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:45.975584 kubelet[3251]: E1108 00:35:45.975558 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:45.975729 kubelet[3251]: E1108 
00:35:45.975688 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8h6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-qzb5q_calico-apiserver(a8220629-4d1c-4d1f-829d-7ab1eb924825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:45.977244 kubelet[3251]: E1108 00:35:45.977212 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:35:47.398212 containerd[2015]: time="2025-11-08T00:35:47.397700261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:35:47.665929 containerd[2015]: time="2025-11-08T00:35:47.665784824Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:47.668022 containerd[2015]: time="2025-11-08T00:35:47.667972609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:35:47.668201 containerd[2015]: time="2025-11-08T00:35:47.668003837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:35:47.668402 kubelet[3251]: E1108 00:35:47.668350 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:35:47.669018 kubelet[3251]: E1108 00:35:47.668412 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:35:47.669018 kubelet[3251]: E1108 00:35:47.668656 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nz66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6dbcdb77-gwjrt_calico-system(95719bb6-d015-4d14-97fc-c6a4da2f553e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:47.669587 containerd[2015]: time="2025-11-08T00:35:47.669542553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:35:47.669894 kubelet[3251]: E1108 00:35:47.669768 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:35:47.953221 containerd[2015]: time="2025-11-08T00:35:47.953098725Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:47.955504 containerd[2015]: time="2025-11-08T00:35:47.955441899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:35:47.955680 containerd[2015]: time="2025-11-08T00:35:47.955539562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:35:47.955939 kubelet[3251]: E1108 00:35:47.955878 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:47.956092 kubelet[3251]: E1108 00:35:47.955944 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:47.956197 kubelet[3251]: E1108 00:35:47.956096 3251 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:47.960170 containerd[2015]: time="2025-11-08T00:35:47.960135744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:35:48.236180 containerd[2015]: time="2025-11-08T00:35:48.236023450Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:48.238088 containerd[2015]: time="2025-11-08T00:35:48.238039742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:35:48.238219 containerd[2015]: time="2025-11-08T00:35:48.238093914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:35:48.238451 kubelet[3251]: E1108 00:35:48.238383 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:48.238451 kubelet[3251]: E1108 00:35:48.238434 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:48.238789 kubelet[3251]: E1108 00:35:48.238700 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:48.240025 kubelet[3251]: E1108 00:35:48.239986 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:35:48.398473 containerd[2015]: time="2025-11-08T00:35:48.397714043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:48.668144 containerd[2015]: time="2025-11-08T00:35:48.668089070Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:48.671294 containerd[2015]: time="2025-11-08T00:35:48.671242589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:48.672144 containerd[2015]: time="2025-11-08T00:35:48.671267486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:48.672187 kubelet[3251]: E1108 00:35:48.671486 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:48.672187 kubelet[3251]: E1108 00:35:48.671528 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:48.672187 kubelet[3251]: E1108 00:35:48.671694 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8v6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-blwxn_calico-apiserver(3efb6459-6c10-4cbc-8627-eed63b51acf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:48.672913 kubelet[3251]: E1108 00:35:48.672869 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:35:49.399516 containerd[2015]: time="2025-11-08T00:35:49.399096384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:35:49.704549 containerd[2015]: time="2025-11-08T00:35:49.704406886Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:49.706890 containerd[2015]: time="2025-11-08T00:35:49.706747437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:35:49.707213 containerd[2015]: time="2025-11-08T00:35:49.706769234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:49.707276 kubelet[3251]: E1108 00:35:49.707173 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:35:49.707276 kubelet[3251]: E1108 00:35:49.707224 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:35:49.707832 kubelet[3251]: E1108 00:35:49.707397 3251 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcgbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kblsp_calico-system(941e9e80-4862-4e00-88e0-89c895eac1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:49.708829 kubelet[3251]: E1108 00:35:49.708773 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" 
podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:35:50.552353 systemd[1]: Started sshd@13-172.31.19.248:22-139.178.89.65:58342.service - OpenSSH per-connection server daemon (139.178.89.65:58342). Nov 8 00:35:50.724506 sshd[6399]: Accepted publickey for core from 139.178.89.65 port 58342 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:50.726053 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:50.731846 systemd-logind[1990]: New session 14 of user core. Nov 8 00:35:50.742001 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:35:50.940589 sshd[6399]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:50.943735 systemd[1]: sshd@13-172.31.19.248:22-139.178.89.65:58342.service: Deactivated successfully. Nov 8 00:35:50.947822 systemd-logind[1990]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:35:50.948551 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:35:50.950492 systemd-logind[1990]: Removed session 14. Nov 8 00:35:55.970132 systemd[1]: Started sshd@14-172.31.19.248:22-139.178.89.65:58356.service - OpenSSH per-connection server daemon (139.178.89.65:58356). Nov 8 00:35:56.136244 sshd[6415]: Accepted publickey for core from 139.178.89.65 port 58356 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:35:56.137781 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:56.142576 systemd-logind[1990]: New session 15 of user core. Nov 8 00:35:56.147948 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:35:56.403540 kubelet[3251]: E1108 00:35:56.403465 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:35:56.407874 sshd[6415]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:56.415491 systemd-logind[1990]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:35:56.416700 systemd[1]: sshd@14-172.31.19.248:22-139.178.89.65:58356.service: Deactivated successfully. Nov 8 00:35:56.421888 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:35:56.426872 systemd-logind[1990]: Removed session 15. 
Nov 8 00:35:57.400270 kubelet[3251]: E1108 00:35:57.399046 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:36:00.400475 kubelet[3251]: E1108 00:36:00.400434 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:36:01.461071 kubelet[3251]: E1108 00:36:01.455637 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:36:01.461071 kubelet[3251]: E1108 00:36:01.460104 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:36:01.463540 kubelet[3251]: E1108 00:36:01.463102 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:36:01.463540 kubelet[3251]: E1108 00:36:01.463464 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:36:01.473784 systemd[1]: Started sshd@15-172.31.19.248:22-139.178.89.65:41630.service - OpenSSH per-connection server daemon (139.178.89.65:41630). Nov 8 00:36:01.712653 sshd[6451]: Accepted publickey for core from 139.178.89.65 port 41630 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:01.760682 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:01.802756 systemd-logind[1990]: New session 16 of user core. Nov 8 00:36:01.821755 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:36:02.729420 sshd[6451]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:02.738186 systemd[1]: sshd@15-172.31.19.248:22-139.178.89.65:41630.service: Deactivated successfully. Nov 8 00:36:02.744341 systemd-logind[1990]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:36:02.745499 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:36:02.757520 systemd[1]: Started sshd@16-172.31.19.248:22-139.178.89.65:41632.service - OpenSSH per-connection server daemon (139.178.89.65:41632). Nov 8 00:36:02.758554 systemd-logind[1990]: Removed session 16. Nov 8 00:36:02.938336 sshd[6465]: Accepted publickey for core from 139.178.89.65 port 41632 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:02.940935 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:02.948008 systemd-logind[1990]: New session 17 of user core. Nov 8 00:36:02.954135 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:36:03.392039 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:36:03.393991 systemd-journald[1503]: Under memory pressure, flushing caches. Nov 8 00:36:03.392076 systemd-resolved[1916]: Flushed all caches. Nov 8 00:36:03.610095 sshd[6465]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:03.614606 systemd[1]: sshd@16-172.31.19.248:22-139.178.89.65:41632.service: Deactivated successfully. Nov 8 00:36:03.619488 systemd-logind[1990]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:36:03.620503 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:36:03.622807 systemd-logind[1990]: Removed session 17. Nov 8 00:36:03.640041 systemd[1]: Started sshd@17-172.31.19.248:22-139.178.89.65:41640.service - OpenSSH per-connection server daemon (139.178.89.65:41640). Nov 8 00:36:03.810303 sshd[6477]: Accepted publickey for core from 139.178.89.65 port 41640 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:03.812455 sshd[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:03.817682 systemd-logind[1990]: New session 18 of user core. 
Nov 8 00:36:03.823193 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:36:04.703024 sshd[6477]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:04.714140 systemd-logind[1990]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:36:04.714153 systemd[1]: sshd@17-172.31.19.248:22-139.178.89.65:41640.service: Deactivated successfully. Nov 8 00:36:04.719348 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:36:04.721673 systemd-logind[1990]: Removed session 18. Nov 8 00:36:04.733227 systemd[1]: Started sshd@18-172.31.19.248:22-139.178.89.65:41642.service - OpenSSH per-connection server daemon (139.178.89.65:41642). Nov 8 00:36:04.914804 sshd[6496]: Accepted publickey for core from 139.178.89.65 port 41642 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:04.916902 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:04.932938 systemd-logind[1990]: New session 19 of user core. Nov 8 00:36:04.939226 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:36:05.439814 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:36:05.441899 systemd-journald[1503]: Under memory pressure, flushing caches. Nov 8 00:36:05.439838 systemd-resolved[1916]: Flushed all caches. Nov 8 00:36:05.875931 sshd[6496]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:05.886197 systemd[1]: sshd@18-172.31.19.248:22-139.178.89.65:41642.service: Deactivated successfully. Nov 8 00:36:05.902188 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:36:05.904028 systemd-logind[1990]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:36:05.921993 systemd[1]: Started sshd@19-172.31.19.248:22-139.178.89.65:41658.service - OpenSSH per-connection server daemon (139.178.89.65:41658). Nov 8 00:36:05.924756 systemd-logind[1990]: Removed session 19. Nov 8 00:36:06.088477 sshd[6508]: Accepted publickey for core from 139.178.89.65 port 41658 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:06.088396 sshd[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:06.103745 systemd-logind[1990]: New session 20 of user core. Nov 8 00:36:06.105946 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:36:06.474929 sshd[6508]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:06.480833 systemd[1]: sshd@19-172.31.19.248:22-139.178.89.65:41658.service: Deactivated successfully. Nov 8 00:36:06.481161 systemd-logind[1990]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:36:06.489054 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:36:06.494422 systemd-logind[1990]: Removed session 20. Nov 8 00:36:07.487755 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:36:07.487764 systemd-resolved[1916]: Flushed all caches. Nov 8 00:36:07.489663 systemd-journald[1503]: Under memory pressure, flushing caches. 
Nov 8 00:36:10.398054 containerd[2015]: time="2025-11-08T00:36:10.397873710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:36:10.694573 containerd[2015]: time="2025-11-08T00:36:10.694291713Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:10.696586 containerd[2015]: time="2025-11-08T00:36:10.696450711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:36:10.696750 containerd[2015]: time="2025-11-08T00:36:10.696680304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:36:10.696947 kubelet[3251]: E1108 00:36:10.696882 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:10.697373 kubelet[3251]: E1108 00:36:10.696959 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:10.697829 kubelet[3251]: E1108 00:36:10.697764 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lvqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66479b5f68-gw447_calico-apiserver(f8a56697-875b-4b53-b7cd-550689e931a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:10.699068 kubelet[3251]: E1108 00:36:10.699004 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:36:11.396866 containerd[2015]: time="2025-11-08T00:36:11.396825813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:36:11.504243 systemd[1]: Started sshd@20-172.31.19.248:22-139.178.89.65:54698.service - OpenSSH per-connection server daemon (139.178.89.65:54698). Nov 8 00:36:11.679602 sshd[6530]: Accepted publickey for core from 139.178.89.65 port 54698 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:11.682554 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:11.688265 systemd-logind[1990]: New session 21 of user core. Nov 8 00:36:11.692905 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 8 00:36:11.706152 containerd[2015]: time="2025-11-08T00:36:11.706103084Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:11.708528 containerd[2015]: time="2025-11-08T00:36:11.708437688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:36:11.708685 containerd[2015]: time="2025-11-08T00:36:11.708534484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:36:11.708746 kubelet[3251]: E1108 00:36:11.708709 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:36:11.709284 kubelet[3251]: E1108 00:36:11.708755 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:36:11.709284 kubelet[3251]: E1108 00:36:11.708872 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4404d727bb9a4ffdaf41a02b37a33d06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:11.712294 containerd[2015]: 
time="2025-11-08T00:36:11.712231479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:36:11.970264 sshd[6530]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:11.974003 systemd[1]: sshd@20-172.31.19.248:22-139.178.89.65:54698.service: Deactivated successfully. Nov 8 00:36:11.976950 systemd-logind[1990]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:36:11.978567 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:36:11.979934 systemd-logind[1990]: Removed session 21. Nov 8 00:36:11.994219 containerd[2015]: time="2025-11-08T00:36:11.994078954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:11.996278 containerd[2015]: time="2025-11-08T00:36:11.996220832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:36:11.996414 containerd[2015]: time="2025-11-08T00:36:11.996314085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:36:11.996548 kubelet[3251]: E1108 00:36:11.996497 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:36:11.996645 kubelet[3251]: E1108 00:36:11.996558 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:36:11.996812 kubelet[3251]: E1108 00:36:11.996727 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:11.998186 kubelet[3251]: E1108 00:36:11.998139 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:36:13.407975 containerd[2015]: time="2025-11-08T00:36:13.407924562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:36:13.687329 containerd[2015]: time="2025-11-08T00:36:13.683055769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:13.687329 containerd[2015]: time="2025-11-08T00:36:13.685146225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:36:13.687329 containerd[2015]: time="2025-11-08T00:36:13.685230013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:36:13.687749 kubelet[3251]: E1108 00:36:13.685422 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:13.687749 kubelet[3251]: E1108 00:36:13.685479 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:13.687749 kubelet[3251]: E1108 00:36:13.685594 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8h6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-qzb5q_calico-apiserver(a8220629-4d1c-4d1f-829d-7ab1eb924825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:13.687749 kubelet[3251]: E1108 00:36:13.687276 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:36:15.397166 containerd[2015]: time="2025-11-08T00:36:15.396903644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:36:15.705518 containerd[2015]: time="2025-11-08T00:36:15.705376097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:15.707594 containerd[2015]: time="2025-11-08T00:36:15.707476663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:36:15.707594 containerd[2015]: time="2025-11-08T00:36:15.707541953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:36:15.707858 kubelet[3251]: E1108 00:36:15.707819 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:15.708207 kubelet[3251]: E1108 00:36:15.707867 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:15.708207 kubelet[3251]: E1108 00:36:15.708063 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8v6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-blwxn_calico-apiserver(3efb6459-6c10-4cbc-8627-eed63b51acf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:15.708654 containerd[2015]: time="2025-11-08T00:36:15.708608749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:36:15.710243 kubelet[3251]: E1108 00:36:15.710179 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:36:15.990734 containerd[2015]: time="2025-11-08T00:36:15.986404250Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:15.992704 containerd[2015]: time="2025-11-08T00:36:15.992647665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:36:15.992836 containerd[2015]: 
time="2025-11-08T00:36:15.992738297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:36:15.992952 kubelet[3251]: E1108 00:36:15.992904 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:36:15.993084 kubelet[3251]: E1108 00:36:15.992963 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:36:15.993195 kubelet[3251]: E1108 00:36:15.993139 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcgbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kblsp_calico-system(941e9e80-4862-4e00-88e0-89c895eac1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:15.994828 kubelet[3251]: E1108 00:36:15.994768 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:36:16.398866 containerd[2015]: time="2025-11-08T00:36:16.398766550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:36:16.687320 containerd[2015]: time="2025-11-08T00:36:16.687076814Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:16.689452 containerd[2015]: time="2025-11-08T00:36:16.689387032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:36:16.689666 containerd[2015]: time="2025-11-08T00:36:16.689412130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:36:16.689728 kubelet[3251]: E1108 00:36:16.689653 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:36:16.689728 kubelet[3251]: E1108 00:36:16.689710 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 
00:36:16.691381 kubelet[3251]: E1108 00:36:16.690008 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nz66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6dbcdb77-gwjrt_calico-system(95719bb6-d015-4d14-97fc-c6a4da2f553e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:16.691381 kubelet[3251]: E1108 00:36:16.691233 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" 
podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:36:16.692154 containerd[2015]: time="2025-11-08T00:36:16.690318363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:36:16.954286 containerd[2015]: time="2025-11-08T00:36:16.954156839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:16.956412 containerd[2015]: time="2025-11-08T00:36:16.956343907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:36:16.956565 containerd[2015]: time="2025-11-08T00:36:16.956444419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:36:16.956654 kubelet[3251]: E1108 00:36:16.956592 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:36:16.957096 kubelet[3251]: E1108 00:36:16.956654 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:36:16.957096 kubelet[3251]: E1108 00:36:16.956775 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:16.973369 containerd[2015]: time="2025-11-08T00:36:16.973316492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:36:17.000015 systemd[1]: Started sshd@21-172.31.19.248:22-139.178.89.65:47222.service - OpenSSH per-connection server daemon (139.178.89.65:47222). Nov 8 00:36:17.181803 sshd[6549]: Accepted publickey for core from 139.178.89.65 port 47222 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:17.184794 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:17.194315 systemd-logind[1990]: New session 22 of user core. Nov 8 00:36:17.202921 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 8 00:36:17.272884 containerd[2015]: time="2025-11-08T00:36:17.272564529Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:17.274754 containerd[2015]: time="2025-11-08T00:36:17.274559057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:36:17.277574 containerd[2015]: time="2025-11-08T00:36:17.274731771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:36:17.277742 kubelet[3251]: E1108 00:36:17.276958 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:36:17.277742 kubelet[3251]: E1108 00:36:17.277015 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:36:17.277742 kubelet[3251]: E1108 00:36:17.277157 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:17.278388 kubelet[3251]: E1108 00:36:17.278339 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:36:18.158883 sshd[6549]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:18.167334 systemd-logind[1990]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:36:18.170483 systemd[1]: sshd@21-172.31.19.248:22-139.178.89.65:47222.service: Deactivated successfully. Nov 8 00:36:18.176404 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:36:18.186746 systemd-logind[1990]: Removed session 22. Nov 8 00:36:19.459786 systemd-journald[1503]: Under memory pressure, flushing caches. 
Nov 8 00:36:19.455710 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:36:19.455748 systemd-resolved[1916]: Flushed all caches. Nov 8 00:36:23.187661 systemd[1]: Started sshd@22-172.31.19.248:22-139.178.89.65:47232.service - OpenSSH per-connection server daemon (139.178.89.65:47232). Nov 8 00:36:23.402090 sshd[6565]: Accepted publickey for core from 139.178.89.65 port 47232 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:23.407821 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:23.419758 systemd-logind[1990]: New session 23 of user core. Nov 8 00:36:23.423161 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:36:24.293862 sshd[6565]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:24.301830 systemd-logind[1990]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:36:24.302695 systemd[1]: sshd@22-172.31.19.248:22-139.178.89.65:47232.service: Deactivated successfully. Nov 8 00:36:24.318359 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:36:24.320294 systemd-logind[1990]: Removed session 23. Nov 8 00:36:24.397432 kubelet[3251]: E1108 00:36:24.397121 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:36:24.398250 kubelet[3251]: E1108 00:36:24.398128 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:36:25.397649 kubelet[3251]: E1108 00:36:25.396656 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:36:25.410947 systemd-journald[1503]: Under 
memory pressure, flushing caches. Nov 8 00:36:25.410819 systemd-resolved[1916]: Under memory pressure, flushing caches. Nov 8 00:36:25.410847 systemd-resolved[1916]: Flushed all caches. Nov 8 00:36:26.397850 kubelet[3251]: E1108 00:36:26.397812 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:36:29.324536 systemd[1]: Started sshd@23-172.31.19.248:22-139.178.89.65:33036.service - OpenSSH per-connection server daemon (139.178.89.65:33036). Nov 8 00:36:29.536703 sshd[6601]: Accepted publickey for core from 139.178.89.65 port 33036 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:29.540380 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:29.552743 systemd-logind[1990]: New session 24 of user core. Nov 8 00:36:29.561016 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:36:29.888535 sshd[6601]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:29.895878 systemd[1]: sshd@23-172.31.19.248:22-139.178.89.65:33036.service: Deactivated successfully. Nov 8 00:36:29.906309 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:36:29.906519 systemd-logind[1990]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:36:29.911793 systemd-logind[1990]: Removed session 24. 
Nov 8 00:36:30.399938 kubelet[3251]: E1108 00:36:30.399252 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:36:30.439412 kubelet[3251]: E1108 00:36:30.439272 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:36:32.406495 kubelet[3251]: E1108 00:36:32.406433 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:36:34.915169 systemd[1]: Started sshd@24-172.31.19.248:22-139.178.89.65:33044.service - OpenSSH per-connection server daemon (139.178.89.65:33044). Nov 8 00:36:35.073380 sshd[6618]: Accepted publickey for core from 139.178.89.65 port 33044 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:35.075090 sshd[6618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:35.080391 systemd-logind[1990]: New session 25 of user core. Nov 8 00:36:35.082979 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:36:35.280971 sshd[6618]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:35.284635 systemd[1]: sshd@24-172.31.19.248:22-139.178.89.65:33044.service: Deactivated successfully. Nov 8 00:36:35.288200 systemd-logind[1990]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:36:35.288420 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:36:35.290262 systemd-logind[1990]: Removed session 25. 
Nov 8 00:36:35.396872 kubelet[3251]: E1108 00:36:35.396825 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:36:38.399355 kubelet[3251]: E1108 00:36:38.399270 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:36:39.396019 kubelet[3251]: E1108 00:36:39.395969 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:36:40.318004 systemd[1]: Started sshd@25-172.31.19.248:22-139.178.89.65:56432.service - OpenSSH per-connection server daemon (139.178.89.65:56432). Nov 8 00:36:40.402061 kubelet[3251]: E1108 00:36:40.401950 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:36:40.535652 sshd[6632]: Accepted publickey for core from 139.178.89.65 port 56432 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:36:40.538500 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:40.550049 systemd-logind[1990]: New session 26 of user core. Nov 8 00:36:40.558784 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 8 00:36:40.824245 sshd[6632]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:40.829948 systemd-logind[1990]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:36:40.831157 systemd[1]: sshd@25-172.31.19.248:22-139.178.89.65:56432.service: Deactivated successfully. Nov 8 00:36:40.836715 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:36:40.839715 systemd-logind[1990]: Removed session 26. Nov 8 00:36:41.396340 kubelet[3251]: E1108 00:36:41.396265 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:36:43.423874 kubelet[3251]: E1108 00:36:43.423800 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:36:44.398976 kubelet[3251]: E1108 00:36:44.398065 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:36:46.397344 kubelet[3251]: E1108 00:36:46.397273 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:36:49.395737 kubelet[3251]: E1108 00:36:49.395672 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:36:51.395922 kubelet[3251]: E1108 00:36:51.395791 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:36:53.395808 kubelet[3251]: E1108 00:36:53.395763 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:36:54.236066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4-rootfs.mount: Deactivated successfully. 
Nov 8 00:36:54.280124 containerd[2015]: time="2025-11-08T00:36:54.250093292Z" level=info msg="shim disconnected" id=53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4 namespace=k8s.io Nov 8 00:36:54.297296 containerd[2015]: time="2025-11-08T00:36:54.297220128Z" level=warning msg="cleaning up after shim disconnected" id=53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4 namespace=k8s.io Nov 8 00:36:54.297296 containerd[2015]: time="2025-11-08T00:36:54.297266211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:36:54.923987 kubelet[3251]: E1108 00:36:54.923898 3251 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 8 00:36:55.245319 kubelet[3251]: I1108 00:36:55.245268 3251 scope.go:117] "RemoveContainer" containerID="53815cce14a84ec60e7227d23c8e91b7c943714fa7ecb0910eb148c0d0fb25b4" Nov 8 00:36:55.247645 containerd[2015]: time="2025-11-08T00:36:55.247590831Z" level=info msg="CreateContainer within sandbox \"793e3d34225c4fbd12400f513c974d8daff028589ff777c25fa475137e4c735a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:36:55.271050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564515510.mount: Deactivated successfully. Nov 8 00:36:55.271801 containerd[2015]: time="2025-11-08T00:36:55.271684691Z" level=info msg="CreateContainer within sandbox \"793e3d34225c4fbd12400f513c974d8daff028589ff777c25fa475137e4c735a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cb8d81d62d0f24af40338ed844da2b38ecc058a67149e1c5f72269c2c91f0f63\"" Nov 8 00:36:55.272591 containerd[2015]: time="2025-11-08T00:36:55.272562632Z" level=info msg="StartContainer for \"cb8d81d62d0f24af40338ed844da2b38ecc058a67149e1c5f72269c2c91f0f63\"" Nov 8 00:36:55.367206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6-rootfs.mount: Deactivated successfully. 
Nov 8 00:36:55.389035 containerd[2015]: time="2025-11-08T00:36:55.388511626Z" level=info msg="shim disconnected" id=34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6 namespace=k8s.io Nov 8 00:36:55.389035 containerd[2015]: time="2025-11-08T00:36:55.388586561Z" level=warning msg="cleaning up after shim disconnected" id=34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6 namespace=k8s.io Nov 8 00:36:55.389035 containerd[2015]: time="2025-11-08T00:36:55.388639128Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:36:55.400655 containerd[2015]: time="2025-11-08T00:36:55.400581368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:36:55.406531 containerd[2015]: time="2025-11-08T00:36:55.405486820Z" level=info msg="StartContainer for \"cb8d81d62d0f24af40338ed844da2b38ecc058a67149e1c5f72269c2c91f0f63\" returns successfully" Nov 8 00:36:55.422713 containerd[2015]: time="2025-11-08T00:36:55.421887664Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:36:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:36:55.721535 containerd[2015]: time="2025-11-08T00:36:55.721317902Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:55.723889 containerd[2015]: time="2025-11-08T00:36:55.723799365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:36:55.724028 containerd[2015]: time="2025-11-08T00:36:55.723957952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:36:55.725292 kubelet[3251]: E1108 00:36:55.724849 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:55.725292 kubelet[3251]: E1108 00:36:55.725047 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:36:55.725292 kubelet[3251]: E1108 00:36:55.725229 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8h6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-qzb5q_calico-apiserver(a8220629-4d1c-4d1f-829d-7ab1eb924825): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:55.726464 kubelet[3251]: E1108 00:36:55.726409 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:36:56.240133 kubelet[3251]: I1108 00:36:56.239517 3251 scope.go:117] "RemoveContainer" containerID="34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6" Nov 8 00:36:56.253973 containerd[2015]: time="2025-11-08T00:36:56.253735824Z" level=info msg="CreateContainer within sandbox \"8c9d6d3be4be5cd43be27b95299f5d2539763c38d7bbc1b908064ece17d13daa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:36:56.281246 containerd[2015]: time="2025-11-08T00:36:56.281022319Z" level=info msg="CreateContainer within sandbox \"8c9d6d3be4be5cd43be27b95299f5d2539763c38d7bbc1b908064ece17d13daa\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c\"" Nov 8 
00:36:56.283890 containerd[2015]: time="2025-11-08T00:36:56.282808868Z" level=info msg="StartContainer for \"cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c\"" Nov 8 00:36:56.356810 systemd[1]: run-containerd-runc-k8s.io-cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c-runc.nNGkRp.mount: Deactivated successfully. Nov 8 00:36:56.406124 kubelet[3251]: E1108 00:36:56.405908 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:36:56.424117 containerd[2015]: time="2025-11-08T00:36:56.423891819Z" level=info msg="StartContainer for \"cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c\" returns successfully" Nov 8 00:36:57.396098 containerd[2015]: time="2025-11-08T00:36:57.396058290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:36:57.702002 containerd[2015]: time="2025-11-08T00:36:57.701862027Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:57.704271 containerd[2015]: time="2025-11-08T00:36:57.704133241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:36:57.704271 containerd[2015]: time="2025-11-08T00:36:57.704220584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:36:57.704422 kubelet[3251]: E1108 00:36:57.704343 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:36:57.704422 kubelet[3251]: E1108 00:36:57.704388 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:36:57.704839 kubelet[3251]: E1108 00:36:57.704596 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dcgbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kblsp_calico-system(941e9e80-4862-4e00-88e0-89c895eac1a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:57.704986 containerd[2015]: time="2025-11-08T00:36:57.704837911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:36:57.706420 kubelet[3251]: E1108 00:36:57.706381 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:36:57.973048 containerd[2015]: time="2025-11-08T00:36:57.972825105Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:57.975150 containerd[2015]: time="2025-11-08T00:36:57.974957206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:36:57.975734 containerd[2015]: time="2025-11-08T00:36:57.975057708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:36:57.975808 kubelet[3251]: E1108 00:36:57.975505 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:36:57.975808 kubelet[3251]: E1108 00:36:57.975551 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:36:57.975808 kubelet[3251]: E1108 00:36:57.975690 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4404d727bb9a4ffdaf41a02b37a33d06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:36:57.977923 containerd[2015]: time="2025-11-08T00:36:57.977872795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:36:58.253776 containerd[2015]: time="2025-11-08T00:36:58.253716843Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:36:58.255791 containerd[2015]: time="2025-11-08T00:36:58.255736562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:36:58.255977 containerd[2015]: time="2025-11-08T00:36:58.255746768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:36:58.256029 kubelet[3251]: E1108 00:36:58.255962 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:36:58.256029 kubelet[3251]: E1108 00:36:58.256014 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:36:58.256204 kubelet[3251]: E1108 00:36:58.256159 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqcns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6765766d45-dksmq_calico-system(d2ca12f7-a78a-4b5d-ab7c-23346dac65ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:36:58.257376 kubelet[3251]: E1108 00:36:58.257328 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:37:00.949042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e-rootfs.mount: Deactivated successfully. 
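Every pull failure above follows the same shape: containerd walks its resolver to ghcr.io, the registry answers 404 for the v3.30.4 tag ("trying next host - response was http.StatusNotFound"), and kubelet surfaces it as ErrImagePull. A quick way to tell a genuinely missing tag apart from an auth or proxy problem on the node is to query the OCI distribution API for the manifest directly. A minimal Go sketch, assuming ghcr.io's anonymous token endpoint for public repositories (the token exchange is the standard registry auth flow, not something shown in this log; the repository and tag are taken from the entries above):

    // checktag.go: ask ghcr.io whether a tag resolves, mirroring the
    // "failed to resolve reference" step containerd performs above.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        repo, tag := "flatcar/calico/apiserver", "v3.30.4" // from the log

        // Step 1: fetch an anonymous pull token (assumed to be allowed
        // for public images on ghcr.io).
        resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
        if err != nil {
            fmt.Fprintln(os.Stderr, "token request failed:", err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            fmt.Fprintln(os.Stderr, "unexpected token response:", err)
            os.Exit(1)
        }

        // Step 2: HEAD the manifest. A 404 here is exactly what
        // containerd reports as "not found" in the entries above.
        req, _ := http.NewRequest(http.MethodHead,
            "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Fprintln(os.Stderr, "manifest request failed:", err)
            os.Exit(1)
        }
        res.Body.Close()
        fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
    }

An HTTP 200 would point at a pull-path problem on the node; the 404 that matches this log means the tag simply is not published under ghcr.io/flatcar.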
Nov 8 00:37:00.972316 containerd[2015]: time="2025-11-08T00:37:00.972245795Z" level=info msg="shim disconnected" id=05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e namespace=k8s.io Nov 8 00:37:00.972316 containerd[2015]: time="2025-11-08T00:37:00.972311638Z" level=warning msg="cleaning up after shim disconnected" id=05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e namespace=k8s.io Nov 8 00:37:00.972316 containerd[2015]: time="2025-11-08T00:37:00.972320554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:37:01.254393 kubelet[3251]: I1108 00:37:01.254351 3251 scope.go:117] "RemoveContainer" containerID="05692ec79b6ced6cd7b2c175d0bd176040d3f5bce6088cf759cc9866fb04610e" Nov 8 00:37:01.256964 containerd[2015]: time="2025-11-08T00:37:01.256918590Z" level=info msg="CreateContainer within sandbox \"ce71d163e6a3f2d12e8200af3e4b0f5270474b212601b19158dde4490d2c7942\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:37:01.290488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993259409.mount: Deactivated successfully. Nov 8 00:37:01.292363 containerd[2015]: time="2025-11-08T00:37:01.292287950Z" level=info msg="CreateContainer within sandbox \"ce71d163e6a3f2d12e8200af3e4b0f5270474b212601b19158dde4490d2c7942\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"df774cb66d35490027d3f872332b1965a3bf1887832a47f5fc0c4b24254150d0\"" Nov 8 00:37:01.295432 containerd[2015]: time="2025-11-08T00:37:01.295331790Z" level=info msg="StartContainer for \"df774cb66d35490027d3f872332b1965a3bf1887832a47f5fc0c4b24254150d0\"" Nov 8 00:37:01.392577 containerd[2015]: time="2025-11-08T00:37:01.392504435Z" level=info msg="StartContainer for \"df774cb66d35490027d3f872332b1965a3bf1887832a47f5fc0c4b24254150d0\" returns successfully" Nov 8 00:37:03.396312 containerd[2015]: time="2025-11-08T00:37:03.396246408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:03.666656 containerd[2015]: time="2025-11-08T00:37:03.666439639Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:03.668567 containerd[2015]: time="2025-11-08T00:37:03.668495544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:03.668698 containerd[2015]: time="2025-11-08T00:37:03.668586245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:03.668782 kubelet[3251]: E1108 00:37:03.668737 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:03.669215 kubelet[3251]: E1108 00:37:03.668785 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" 
Nov 8 00:37:03.669215 kubelet[3251]: E1108 00:37:03.668913 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lvqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66479b5f68-gw447_calico-apiserver(f8a56697-875b-4b53-b7cd-550689e931a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:03.670237 kubelet[3251]: E1108 00:37:03.670080 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:37:04.396721 containerd[2015]: time="2025-11-08T00:37:04.396664206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:37:04.660389 containerd[2015]: time="2025-11-08T00:37:04.660260819Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:04.662398 containerd[2015]: time="2025-11-08T00:37:04.662336085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:37:04.662563 containerd[2015]: time="2025-11-08T00:37:04.662430221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:37:04.662670 kubelet[3251]: E1108 00:37:04.662612 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:37:04.662733 kubelet[3251]: E1108 00:37:04.662680 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:37:04.663032 kubelet[3251]: E1108 00:37:04.662925 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nz66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d6dbcdb77-gwjrt_calico-system(95719bb6-d015-4d14-97fc-c6a4da2f553e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:04.663330 containerd[2015]: time="2025-11-08T00:37:04.663286414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:37:04.664397 kubelet[3251]: E1108 00:37:04.664357 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d6dbcdb77-gwjrt" podUID="95719bb6-d015-4d14-97fc-c6a4da2f553e" Nov 8 00:37:04.924401 kubelet[3251]: E1108 00:37:04.924181 3251 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:37:04.951232 containerd[2015]: time="2025-11-08T00:37:04.951180727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:04.953437 containerd[2015]: time="2025-11-08T00:37:04.953297475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:37:04.953437 containerd[2015]: time="2025-11-08T00:37:04.953384301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:37:04.953815 kubelet[3251]: E1108 00:37:04.953770 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:04.953815 kubelet[3251]: E1108 00:37:04.953817 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:37:04.953976 kubelet[3251]: E1108 00:37:04.953936 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8v6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76d787d64-blwxn_calico-apiserver(3efb6459-6c10-4cbc-8627-eed63b51acf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:04.955174 kubelet[3251]: E1108 00:37:04.955126 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1" Nov 8 00:37:08.077861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c-rootfs.mount: Deactivated successfully. 
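By this point kubelet has moved most of these containers from ErrImagePull into ImagePullBackOff, and the tigera-operator container below lands in CrashLoopBackOff with "back-off 10s". Both back-offs follow the same doubling schedule. A small sketch of that schedule, assuming kubelet's long-standing defaults of a 10s initial period doubling up to a 300s cap (these constants are kubelet internals and are not printed in the log):

    // backoff.go: print the doubling retry schedule behind the
    // ImagePullBackOff / CrashLoopBackOff intervals seen above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second     // assumed initial back-off
        maxDelay := 300 * time.Second // assumed cap (5m)
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("retry %d: back-off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under those assumed defaults the node settles into retrying each missing image roughly every five minutes, which is why identical NotFound entries keep reappearing throughout this log.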
Nov 8 00:37:08.102988 containerd[2015]: time="2025-11-08T00:37:08.102932735Z" level=info msg="shim disconnected" id=cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c namespace=k8s.io Nov 8 00:37:08.102988 containerd[2015]: time="2025-11-08T00:37:08.102984538Z" level=warning msg="cleaning up after shim disconnected" id=cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c namespace=k8s.io Nov 8 00:37:08.102988 containerd[2015]: time="2025-11-08T00:37:08.102992871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:37:08.274693 kubelet[3251]: I1108 00:37:08.274601 3251 scope.go:117] "RemoveContainer" containerID="34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6" Nov 8 00:37:08.275119 kubelet[3251]: I1108 00:37:08.274766 3251 scope.go:117] "RemoveContainer" containerID="cb70f1fd44914ca6bc3620953b9ef03ec4f9f5a8c6ed10dd0fdfa357ce9d5c2c" Nov 8 00:37:08.275119 kubelet[3251]: E1108 00:37:08.274928 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-btbvj_tigera-operator(41717960-a796-4d88-b39b-253ce3f3dc3e)\"" pod="tigera-operator/tigera-operator-7dcd859c48-btbvj" podUID="41717960-a796-4d88-b39b-253ce3f3dc3e" Nov 8 00:37:08.303048 containerd[2015]: time="2025-11-08T00:37:08.302988119Z" level=info msg="RemoveContainer for \"34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6\"" Nov 8 00:37:08.308578 containerd[2015]: time="2025-11-08T00:37:08.308527458Z" level=info msg="RemoveContainer for \"34ae0244bc2b9cb702c62fd142768cc9f7edc5718f6136b479a719e28648d9e6\" returns successfully" Nov 8 00:37:09.395836 containerd[2015]: time="2025-11-08T00:37:09.395794881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:37:09.673409 containerd[2015]: time="2025-11-08T00:37:09.673265747Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:09.675428 containerd[2015]: time="2025-11-08T00:37:09.675319482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:37:09.675428 containerd[2015]: time="2025-11-08T00:37:09.675377154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:37:09.675608 kubelet[3251]: E1108 00:37:09.675562 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:09.675970 kubelet[3251]: E1108 00:37:09.675602 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:37:09.675970 kubelet[3251]: E1108 00:37:09.675728 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:09.677662 containerd[2015]: time="2025-11-08T00:37:09.677619793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:37:09.946582 containerd[2015]: time="2025-11-08T00:37:09.946452071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:37:09.948689 containerd[2015]: time="2025-11-08T00:37:09.948537937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:37:09.948689 containerd[2015]: time="2025-11-08T00:37:09.948639722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:37:09.948831 kubelet[3251]: E1108 00:37:09.948789 3251 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:37:09.948889 kubelet[3251]: E1108 00:37:09.948838 3251 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:37:09.949018 kubelet[3251]: E1108 00:37:09.948962 3251 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l858l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xqwhf_calico-system(deabdfcd-c211-4fd0-a621-ac2732629dc7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:37:09.950348 kubelet[3251]: E1108 00:37:09.950298 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xqwhf" podUID="deabdfcd-c211-4fd0-a621-ac2732629dc7" Nov 8 00:37:11.395516 kubelet[3251]: E1108 00:37:11.395469 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-qzb5q" podUID="a8220629-4d1c-4d1f-829d-7ab1eb924825" Nov 8 00:37:12.396073 kubelet[3251]: E1108 00:37:12.396026 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6765766d45-dksmq" podUID="d2ca12f7-a78a-4b5d-ab7c-23346dac65ff" Nov 8 00:37:13.395511 kubelet[3251]: E1108 00:37:13.395469 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kblsp" podUID="941e9e80-4862-4e00-88e0-89c895eac1a2" Nov 8 00:37:14.931221 kubelet[3251]: E1108 00:37:14.930763 3251 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.248:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-248?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:37:15.395768 kubelet[3251]: E1108 00:37:15.395654 3251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66479b5f68-gw447" podUID="f8a56697-875b-4b53-b7cd-550689e931a7" Nov 8 00:37:15.395768 kubelet[3251]: E1108 00:37:15.395756 3251 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76d787d64-blwxn" podUID="3efb6459-6c10-4cbc-8627-eed63b51acf1"
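The container dumps above come from kubelet's error path, but they preserve the full operator-generated spec, including the readiness probe that never gets to run because the image never arrives. For reference, the calico-apiserver probe as it appears in those dumps (Path:/readyz, Port:{0 5443 }, Scheme:HTTPS, PeriodSeconds:60), rebuilt with the standard k8s.io/api/core/v1 types; this is a sketch of the decoded literal, not the Tigera operator's source:

    // probe.go: the calico-apiserver readiness probe from the spec
    // dumps above, reconstructed as client-go API types.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        readiness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/readyz",
                    Port:   intstr.FromInt32(5443), // dumped above as {0 5443 }
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            TimeoutSeconds:   5,
            PeriodSeconds:    60, // one probe per minute once running
            SuccessThreshold: 1,
            FailureThreshold: 3,
        }
        fmt.Printf("%+v\n", readiness)
    }

Until the v3.30.4 images are published under ghcr.io/flatcar (or the deployment is repointed at a registry that serves them), none of these probes ever fire and the pods stay in ImagePullBackOff; the kubelet lease-renewal timeouts at 00:37:04 and 00:37:14 against https://172.31.19.248:6443 are a separate connectivity symptom recorded alongside the pull failures.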