Nov 1 00:22:00.906831 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:22:00.906871 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:22:00.906891 kernel: BIOS-provided physical RAM map:
Nov 1 00:22:00.906902 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:22:00.906912 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 1 00:22:00.906921 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Nov 1 00:22:00.906934 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Nov 1 00:22:00.906946 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 1 00:22:00.906958 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 1 00:22:00.906974 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 1 00:22:00.906986 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 1 00:22:00.906998 kernel: NX (Execute Disable) protection: active
Nov 1 00:22:00.907008 kernel: APIC: Static calls initialized
Nov 1 00:22:00.907019 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:22:00.907032 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Nov 1 00:22:00.907048 kernel: SMBIOS 2.7 present.
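The BIOS-e820 map above is the firmware's view of physical memory; summing the "usable" ranges gives the RAM the kernel can manage. A minimal sketch, not part of the boot log, that totals the usable ranges from a saved copy of this output (the file name boot.log is an assumption):

    import re

    # Matches lines like: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    usable = 0
    with open("boot.log") as f:  # assumed file name
        for line in f:
            m = E820.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                usable += end - start + 1  # e820 ranges are inclusive

    print(f"usable RAM: {usable // 1024} KiB")

For the map above this comes to 640 + 1972024 + 65144 = 2037808 KiB; the kernel then reserves the first 4 KiB page ("e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" below), which matches the "Memory: .../2037804K" line later in the log.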
Nov 1 00:22:00.907059 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 1 00:22:00.907071 kernel: Hypervisor detected: KVM
Nov 1 00:22:00.907083 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:22:00.907096 kernel: kvm-clock: using sched offset of 3874485429 cycles
Nov 1 00:22:00.907110 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:22:00.907123 kernel: tsc: Detected 2499.994 MHz processor
Nov 1 00:22:00.907135 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:22:00.907148 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:22:00.907160 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 1 00:22:00.907175 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 00:22:00.907188 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:22:00.907201 kernel: Using GB pages for direct mapping
Nov 1 00:22:00.907213 kernel: Secure boot disabled
Nov 1 00:22:00.907227 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:22:00.907239 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 1 00:22:00.907253 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 1 00:22:00.907266 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 1 00:22:00.907279 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 1 00:22:00.907294 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 1 00:22:00.907307 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 1 00:22:00.907319 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 1 00:22:00.907331 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 1 00:22:00.907342 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 1 00:22:00.907355 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 1 00:22:00.907385 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 1 00:22:00.907400 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 1 00:22:00.907414 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 1 00:22:00.907427 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 1 00:22:00.907440 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 1 00:22:00.907455 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 1 00:22:00.907470 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 1 00:22:00.907488 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 1 00:22:00.907503 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 1 00:22:00.907518 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 1 00:22:00.907533 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 1 00:22:00.907547 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 1 00:22:00.907562 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 1 00:22:00.907576 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 1 00:22:00.907591 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:22:00.907606 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:22:00.907621 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 1 00:22:00.907641 kernel: NUMA: Initialized distance table, cnt=1
Nov 1 00:22:00.907655 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Nov 1 00:22:00.907669 kernel: Zone ranges:
Nov 1 00:22:00.907684 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:22:00.907699 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 1 00:22:00.907726 kernel: Normal empty
Nov 1 00:22:00.907740 kernel: Movable zone start for each node
Nov 1 00:22:00.907767 kernel: Early memory node ranges
Nov 1 00:22:00.908888 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:22:00.908910 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 1 00:22:00.908925 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 1 00:22:00.908939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 1 00:22:00.908951 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:22:00.908963 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:22:00.908976 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 1 00:22:00.908989 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 1 00:22:00.909003 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 1 00:22:00.909016 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:22:00.909034 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 1 00:22:00.909047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:22:00.909061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:22:00.909074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:22:00.909087 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:22:00.909101 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:22:00.909114 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:22:00.909128 kernel: TSC deadline timer available
Nov 1 00:22:00.909141 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:22:00.909158 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:22:00.909171 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 1 00:22:00.909184 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:22:00.909198 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:22:00.909212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:22:00.909225 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:22:00.909238 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:22:00.909251 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:22:00.909264 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:22:00.909277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:22:00.909295 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
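The "Kernel command line:" entry above carries duplicated keys (rootflags=rw and mount.usrflags=ro appear twice because the boot loader prepends them to the arguments already present). A minimal sketch of a /proc/cmdline parser where, as a simplification, later duplicates override earlier ones and quoting is ignored:

    # Parse space-separated key=value tokens; bare flags become True.
    def parse_cmdline(text: str) -> dict:
        params = {}
        for tok in text.split():
            key, sep, val = tok.partition("=")
            params[key] = val if sep else True
        return params

    with open("/proc/cmdline") as f:
        p = parse_cmdline(f.read())
    print(p.get("root"), p.get("flatcar.oem.id"))  # LABEL=ROOT ec2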
Nov 1 00:22:00.909309 kernel: random: crng init done
Nov 1 00:22:00.909323 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:22:00.909336 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:22:00.909349 kernel: Fallback order for Node 0: 0
Nov 1 00:22:00.909363 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Nov 1 00:22:00.909376 kernel: Policy zone: DMA32
Nov 1 00:22:00.909389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:22:00.909405 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 162940K reserved, 0K cma-reserved)
Nov 1 00:22:00.909419 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:22:00.909432 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:22:00.909445 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:22:00.909458 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:22:00.909472 kernel: Dynamic Preempt: voluntary
Nov 1 00:22:00.909484 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:22:00.909503 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:22:00.909517 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:22:00.909533 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:22:00.909547 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:22:00.909561 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:22:00.909574 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:22:00.909587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:22:00.909600 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:22:00.909614 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:22:00.909642 kernel: Console: colour dummy device 80x25
Nov 1 00:22:00.909656 kernel: printk: console [tty0] enabled
Nov 1 00:22:00.909670 kernel: printk: console [ttyS0] enabled
Nov 1 00:22:00.909685 kernel: ACPI: Core revision 20230628
Nov 1 00:22:00.909703 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 1 00:22:00.909717 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:22:00.909731 kernel: x2apic enabled
Nov 1 00:22:00.909746 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:22:00.910802 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Nov 1 00:22:00.910823 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Nov 1 00:22:00.910837 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 00:22:00.910852 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 00:22:00.910869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:22:00.910884 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:22:00.910899 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:22:00.910915 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 1 00:22:00.910932 kernel: RETBleed: Vulnerable
Nov 1 00:22:00.910947 kernel: Speculative Store Bypass: Vulnerable
Nov 1 00:22:00.910962 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:22:00.910982 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:22:00.910998 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 1 00:22:00.911013 kernel: active return thunk: its_return_thunk
Nov 1 00:22:00.911028 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:22:00.911044 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:22:00.911059 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:22:00.911075 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:22:00.911091 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 1 00:22:00.911106 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 1 00:22:00.911122 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 00:22:00.911138 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 00:22:00.911157 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 00:22:00.911172 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 1 00:22:00.911188 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:22:00.911203 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 1 00:22:00.911219 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 1 00:22:00.911234 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 1 00:22:00.911250 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 1 00:22:00.911265 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 1 00:22:00.911280 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 1 00:22:00.911296 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 1 00:22:00.911311 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:22:00.911327 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:22:00.911345 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:22:00.911361 kernel: landlock: Up and running.
Nov 1 00:22:00.911376 kernel: SELinux: Initializing.
Nov 1 00:22:00.911392 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:22:00.911408 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:22:00.911423 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 1 00:22:00.911436 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:22:00.911449 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:22:00.911462 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:22:00.911475 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 00:22:00.911493 kernel: signal: max sigframe size: 3632
Nov 1 00:22:00.911507 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:22:00.911522 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:22:00.911535 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:22:00.911549 kernel: smp: Bringing up secondary CPUs ...
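The vulnerability states logged above (RETBleed, MDS, MMIO Stale Data, and so on) are also exported at runtime under sysfs, one file per issue. A minimal sketch that dumps them on a running system:

    from pathlib import Path

    # Each file holds one line, e.g. "Mitigation: Retpolines" or "Vulnerable",
    # matching the dmesg lines above.
    for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")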
Nov 1 00:22:00.911563 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:22:00.911577 kernel: .... node #0, CPUs: #1
Nov 1 00:22:00.911592 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 1 00:22:00.911610 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:22:00.911628 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:22:00.911642 kernel: smpboot: Max logical packages: 1
Nov 1 00:22:00.911656 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Nov 1 00:22:00.911672 kernel: devtmpfs: initialized
Nov 1 00:22:00.911687 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:22:00.911702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 1 00:22:00.913501 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:22:00.913518 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:22:00.913538 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:22:00.913552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:22:00.913568 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:22:00.913585 kernel: audit: type=2000 audit(1761956519.991:1): state=initialized audit_enabled=0 res=1
Nov 1 00:22:00.913601 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:22:00.913617 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:22:00.913633 kernel: cpuidle: using governor menu
Nov 1 00:22:00.913649 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:22:00.913665 kernel: dca service started, version 1.12.1
Nov 1 00:22:00.913684 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:22:00.913701 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
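The BogoMIPS figures above are consistent with the lpj value logged earlier. A worked check, assuming CONFIG_HZ=1000 for this kernel (BogoMIPS on x86 is loops_per_jiffy / (500000 / HZ)):

    HZ = 1000      # assumed CONFIG_HZ
    lpj = 2499994  # from "Calibrating delay loop (skipped) ... (lpj=2499994)"

    per_cpu = lpj / (500000 / HZ)
    print(per_cpu)      # 4999.988, printed truncated as 4999.98
    print(2 * per_cpu)  # 9999.976 -> "Total of 2 processors activated (9999.97 BogoMIPS)"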
Nov 1 00:22:00.913717 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:22:00.913733 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:22:00.913763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:22:00.913779 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:22:00.913795 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:22:00.913811 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:22:00.913827 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:22:00.913847 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 1 00:22:00.913864 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:22:00.913879 kernel: ACPI: Interpreter enabled
Nov 1 00:22:00.913895 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:22:00.913911 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:22:00.913927 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:22:00.913943 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:22:00.913959 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 1 00:22:00.913975 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:22:00.914206 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:22:00.914365 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 1 00:22:00.914499 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 1 00:22:00.914517 kernel: acpiphp: Slot [3] registered
Nov 1 00:22:00.914533 kernel: acpiphp: Slot [4] registered
Nov 1 00:22:00.914548 kernel: acpiphp: Slot [5] registered
Nov 1 00:22:00.914564 kernel: acpiphp: Slot [6] registered
Nov 1 00:22:00.914579 kernel: acpiphp: Slot [7] registered
Nov 1 00:22:00.914598 kernel: acpiphp: Slot [8] registered
Nov 1 00:22:00.914614 kernel: acpiphp: Slot [9] registered
Nov 1 00:22:00.914629 kernel: acpiphp: Slot [10] registered
Nov 1 00:22:00.914645 kernel: acpiphp: Slot [11] registered
Nov 1 00:22:00.914659 kernel: acpiphp: Slot [12] registered
Nov 1 00:22:00.914675 kernel: acpiphp: Slot [13] registered
Nov 1 00:22:00.914689 kernel: acpiphp: Slot [14] registered
Nov 1 00:22:00.914705 kernel: acpiphp: Slot [15] registered
Nov 1 00:22:00.914722 kernel: acpiphp: Slot [16] registered
Nov 1 00:22:00.914741 kernel: acpiphp: Slot [17] registered
Nov 1 00:22:00.916117 kernel: acpiphp: Slot [18] registered
Nov 1 00:22:00.916138 kernel: acpiphp: Slot [19] registered
Nov 1 00:22:00.916154 kernel: acpiphp: Slot [20] registered
Nov 1 00:22:00.916169 kernel: acpiphp: Slot [21] registered
Nov 1 00:22:00.916185 kernel: acpiphp: Slot [22] registered
Nov 1 00:22:00.916200 kernel: acpiphp: Slot [23] registered
Nov 1 00:22:00.916216 kernel: acpiphp: Slot [24] registered
Nov 1 00:22:00.916232 kernel: acpiphp: Slot [25] registered
Nov 1 00:22:00.916249 kernel: acpiphp: Slot [26] registered
Nov 1 00:22:00.916269 kernel: acpiphp: Slot [27] registered
Nov 1 00:22:00.916286 kernel: acpiphp: Slot [28] registered
Nov 1 00:22:00.916301 kernel: acpiphp: Slot [29] registered
Nov 1 00:22:00.916317 kernel: acpiphp: Slot [30] registered
Nov 1 00:22:00.916333 kernel: acpiphp: Slot [31] registered
Nov 1 00:22:00.916346 kernel: PCI host bridge to bus 0000:00
Nov 1 00:22:00.916522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:22:00.916657 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:22:00.917870 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:22:00.918026 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 1 00:22:00.918165 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 1 00:22:00.918303 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:22:00.918488 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:22:00.918651 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 1 00:22:00.919924 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 1 00:22:00.920095 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 1 00:22:00.920248 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 1 00:22:00.920382 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 1 00:22:00.920515 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 1 00:22:00.920645 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 1 00:22:00.921829 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 1 00:22:00.921999 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 1 00:22:00.922160 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 1 00:22:00.922299 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Nov 1 00:22:00.922432 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 1 00:22:00.922562 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Nov 1 00:22:00.922694 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:22:00.924550 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 1 00:22:00.924701 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Nov 1 00:22:00.924896 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 1 00:22:00.925031 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Nov 1 00:22:00.925052 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:22:00.925068 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:22:00.925083 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:22:00.925098 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:22:00.925113 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:22:00.925133 kernel: iommu: Default domain type: Translated
Nov 1 00:22:00.925148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:22:00.925163 kernel: efivars: Registered efivars operations
Nov 1 00:22:00.925179 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:22:00.925194 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:22:00.925210 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 1 00:22:00.925225 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 1 00:22:00.925359 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 1 00:22:00.925508 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 1 00:22:00.925646 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:22:00.925665 kernel: vgaarb: loaded
Nov 1 00:22:00.925680 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 1 00:22:00.925695 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 1 00:22:00.925709 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:22:00.925724 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:22:00.925739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:22:00.925797 kernel: pnp: PnP ACPI init
Nov 1 00:22:00.925818 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 00:22:00.925835 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:22:00.925851 kernel: NET: Registered PF_INET protocol family
Nov 1 00:22:00.925866 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:22:00.925881 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:22:00.925897 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:22:00.925912 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:22:00.925929 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 00:22:00.925944 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:22:00.925963 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:22:00.925979 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:22:00.925996 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:22:00.926012 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:22:00.926154 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:22:00.926285 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:22:00.926401 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:22:00.926518 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 1 00:22:00.930725 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 1 00:22:00.930943 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:22:00.930970 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:22:00.930988 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:22:00.931005 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Nov 1 00:22:00.931021 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:22:00.931037 kernel: Initialise system trusted keyrings
Nov 1 00:22:00.931053 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:22:00.931069 kernel: Key type asymmetric registered
Nov 1 00:22:00.931092 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:22:00.931107 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:22:00.931123 kernel: io scheduler mq-deadline registered
Nov 1 00:22:00.931139 kernel: io scheduler kyber registered
Nov 1 00:22:00.931155 kernel: io scheduler bfq registered
Nov 1 00:22:00.931170 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:22:00.931187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:22:00.931203 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:22:00.931219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:22:00.931238 kernel: i8042: Warning: Keylock active
Nov 1 00:22:00.931254 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:22:00.931269 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:22:00.931442 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 1 00:22:00.931582 kernel: rtc_cmos 00:00: registered as rtc0
Nov 1 00:22:00.931727 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:22:00 UTC (1761956520)
Nov 1 00:22:00.931900 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 1 00:22:00.931924 kernel: intel_pstate: CPU model not supported
Nov 1 00:22:00.931938 kernel: efifb: probing for efifb
Nov 1 00:22:00.931952 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Nov 1 00:22:00.931966 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 1 00:22:00.931982 kernel: efifb: scrolling: redraw
Nov 1 00:22:00.931996 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 00:22:00.932012 kernel: Console: switching to colour frame buffer device 100x37
Nov 1 00:22:00.932028 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:22:00.932042 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:22:00.932057 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:22:00.932073 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:22:00.932087 kernel: Segment Routing with IPv6
Nov 1 00:22:00.932100 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:22:00.932115 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:22:00.932132 kernel: Key type dns_resolver registered
Nov 1 00:22:00.932148 kernel: IPI shorthand broadcast: enabled
Nov 1 00:22:00.932186 kernel: sched_clock: Marking stable (461002641, 176318747)->(736433876, -99112488)
Nov 1 00:22:00.932207 kernel: registered taskstats version 1
Nov 1 00:22:00.932224 kernel: Loading compiled-in X.509 certificates
Nov 1 00:22:00.932245 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:22:00.932260 kernel: Key type .fscrypt registered
Nov 1 00:22:00.932277 kernel: Key type fscrypt-provisioning registered
Nov 1 00:22:00.932293 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:22:00.932310 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:22:00.932327 kernel: ima: No architecture policies found
Nov 1 00:22:00.932343 kernel: clk: Disabling unused clocks
Nov 1 00:22:00.932359 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:22:00.932376 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:22:00.932398 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:22:00.932415 kernel: Run /init as init process
Nov 1 00:22:00.932432 kernel: with arguments:
Nov 1 00:22:00.932449 kernel: /init
Nov 1 00:22:00.932466 kernel: with environment:
Nov 1 00:22:00.932482 kernel: HOME=/
Nov 1 00:22:00.932498 kernel: TERM=linux
Nov 1 00:22:00.932518 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:22:00.932543 systemd[1]: Detected virtualization amazon.
Nov 1 00:22:00.932562 systemd[1]: Detected architecture x86-64.
Nov 1 00:22:00.932579 systemd[1]: Running in initrd.
Nov 1 00:22:00.932596 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:22:00.932614 systemd[1]: Hostname set to <localhost>.
Nov 1 00:22:00.932632 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:22:00.932649 systemd[1]: Queued start job for default target initrd.target.
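The rtc_cmos line above logs the same instant twice, as an ISO timestamp and as a Unix epoch. A one-line check with the standard library confirms they agree:

    from datetime import datetime, timezone

    # -> 2025-11-01T00:22:00+00:00, i.e. "2025-11-01T00:22:00 UTC (1761956520)"
    print(datetime.fromtimestamp(1761956520, tz=timezone.utc).isoformat())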
Nov 1 00:22:00.932666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:22:00.932687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:22:00.932705 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:22:00.932723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:22:00.932741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:22:00.932793 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:22:00.932819 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:22:00.932839 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:22:00.932857 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:22:00.932875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:22:00.932893 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:22:00.932910 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:22:00.932930 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:22:00.932955 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:22:00.932972 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:22:00.932991 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:22:00.933010 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:22:00.933029 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:22:00.933048 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:22:00.933068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:22:00.933087 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:22:00.933108 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:22:00.933131 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:22:00.933151 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:22:00.933171 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:22:00.933191 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:22:00.933212 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:22:00.933232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:22:00.933282 systemd-journald[178]: Collecting audit messages is disabled.
Nov 1 00:22:00.933326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:00.933346 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:22:00.933365 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:22:00.933384 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:22:00.933407 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
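The device unit names above (e.g. dev-disk-by\x2dlabel-ROOT.device) use systemd's path escaping: '/' maps to '-' and reserved bytes such as '-' itself become \xHH. A simplified sketch of the inverse mapping for absolute device paths; the full rules are in systemd.unit(5):

    def unescape_unit(name: str) -> str:
        # Strip the ".device" suffix, decode \xHH escapes, map '-' back to '/'.
        body = name.removesuffix(".device")
        out, i = [], 0
        while i < len(body):
            if body[i:i+2] == "\\x":
                out.append(chr(int(body[i+2:i+4], 16)))
                i += 4
            elif body[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(body[i])
                i += 1
        return "/" + "".join(out)  # unit names for absolute paths drop the leading '/'

    print(unescape_unit(r"dev-disk-by\x2dlabel-ROOT.device"))
    # -> /dev/disk/by-label/ROOT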
Nov 1 00:22:00.933427 systemd-journald[178]: Journal started
Nov 1 00:22:00.933464 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2963bc2a33d6ddab01777e23c188ba) is 4.7M, max 38.2M, 33.4M free.
Nov 1 00:22:00.939368 systemd-modules-load[179]: Inserted module 'overlay'
Nov 1 00:22:00.947776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:00.947840 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:22:00.950983 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:22:00.964105 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:22:00.967957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:22:00.970591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:22:00.994778 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:22:00.996953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:22:01.002864 kernel: Bridge firewalling registered
Nov 1 00:22:01.000515 systemd-modules-load[179]: Inserted module 'br_netfilter'
Nov 1 00:22:01.004173 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:22:01.007196 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:22:01.011230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:22:01.014268 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:22:01.029042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:22:01.034249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:22:01.047700 dracut-cmdline[205]: dracut-dracut-053
Nov 1 00:22:01.050739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:22:01.052282 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:22:01.063231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:22:01.112617 systemd-resolved[229]: Positive Trust Anchors:
Nov 1 00:22:01.112641 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
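The positive trust anchor logged above is the DNSSEC DS record for the root zone (key tag 20326 is the 2017 root KSK). A small sketch splitting the record into its standard fields, just to show the layout:

    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, _, _, key_tag, algo, digest_type, digest = ds.split()
    # key_tag=20326, algo=8 (RSA/SHA-256), digest_type=2 (SHA-256),
    # so the digest must be 64 hex characters:
    assert digest_type == "2" and len(digest) == 64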
Nov 1 00:22:01.112702 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:22:01.121372 systemd-resolved[229]: Defaulting to hostname 'linux'.
Nov 1 00:22:01.124429 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:22:01.125990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:22:01.152816 kernel: SCSI subsystem initialized
Nov 1 00:22:01.164800 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:22:01.176787 kernel: iscsi: registered transport (tcp)
Nov 1 00:22:01.205043 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:22:01.205122 kernel: QLogic iSCSI HBA Driver
Nov 1 00:22:01.250900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:22:01.257984 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:22:01.297834 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:22:01.297915 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:22:01.301874 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:22:01.357811 kernel: raid6: avx512x4 gen() 14172 MB/s
Nov 1 00:22:01.375976 kernel: raid6: avx512x2 gen() 8468 MB/s
Nov 1 00:22:01.395809 kernel: raid6: avx512x1 gen() 11526 MB/s
Nov 1 00:22:01.413803 kernel: raid6: avx2x4 gen() 2518 MB/s
Nov 1 00:22:01.431946 kernel: raid6: avx2x2 gen() 7259 MB/s
Nov 1 00:22:01.452553 kernel: raid6: avx2x1 gen() 6978 MB/s
Nov 1 00:22:01.452690 kernel: raid6: using algorithm avx512x4 gen() 14172 MB/s
Nov 1 00:22:01.480402 kernel: raid6: .... xor() 1404 MB/s, rmw enabled
Nov 1 00:22:01.480481 kernel: raid6: using avx512x2 recovery algorithm
Nov 1 00:22:01.577958 kernel: xor: automatically using best checksumming function avx
Nov 1 00:22:01.955788 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:22:01.973860 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:22:01.980267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:22:01.999559 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Nov 1 00:22:02.014125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:22:02.026187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:22:02.102392 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 1 00:22:02.270496 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:22:02.280999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:22:02.386251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:22:02.410093 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:22:02.472432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:22:02.477136 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:22:02.479488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:22:02.481296 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:22:02.491422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:22:02.526669 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:22:02.539176 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 1 00:22:02.539450 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 1 00:22:02.546852 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:22:02.555779 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 1 00:22:02.574469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:22:02.575271 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:5c:2d:5d:87:a7
Nov 1 00:22:02.575886 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:22:02.578602 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:22:02.582317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:22:02.583121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:02.587783 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:22:02.587851 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:22:02.587402 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:02.590416 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line.
Nov 1 00:22:02.602323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:02.611079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:22:02.612677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:02.625235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:22:02.634784 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 1 00:22:02.642493 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 1 00:22:02.649601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:22:02.656775 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 1 00:22:02.659153 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:22:02.666820 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:22:02.666894 kernel: GPT:9289727 != 33554431
Nov 1 00:22:02.667843 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:22:02.670468 kernel: GPT:9289727 != 33554431
Nov 1 00:22:02.672653 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:22:02.672713 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 1 00:22:02.691650 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
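The GPT warnings above are what you see when a disk image is written to a larger volume: the backup GPT header still sits at the last LBA of the original image rather than at the end of the disk (Flatcar's disk-uuid.service rewrites it shortly afterwards, hence the "Secondary Header is updated" lines below). A worked check of the two LBAs, assuming 512-byte sectors:

    SECTOR = 512
    image_last_lba = 9289727    # where the backup header currently is
    disk_last_lba = 33554431    # where it should be on this volume

    print((image_last_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB: the original image
    print((disk_last_lba + 1) * SECTOR / 2**30)   # 16.0 GiB: the EBS volume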
Nov 1 00:22:02.796779 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (456)
Nov 1 00:22:02.811795 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (459)
Nov 1 00:22:02.877648 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 1 00:22:02.899050 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 1 00:22:02.906071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 1 00:22:02.913361 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 1 00:22:02.913959 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 1 00:22:02.922161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:22:02.931883 disk-uuid[631]: Primary Header is updated.
Nov 1 00:22:02.931883 disk-uuid[631]: Secondary Entries is updated.
Nov 1 00:22:02.931883 disk-uuid[631]: Secondary Header is updated.
Nov 1 00:22:02.938879 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 1 00:22:02.947882 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 1 00:22:02.955802 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 1 00:22:03.959277 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 1 00:22:03.960528 disk-uuid[632]: The operation has completed successfully.
Nov 1 00:22:04.131478 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:22:04.131626 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:22:04.157052 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:22:04.162811 sh[975]: Success
Nov 1 00:22:04.206879 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:22:04.329907 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:22:04.337916 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:22:04.341053 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:22:04.384919 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:22:04.385000 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:22:04.385022 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:22:04.387188 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:22:04.388569 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:22:04.522820 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 00:22:04.547982 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:22:04.549428 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:22:04.561014 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:22:04.563981 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:22:04.589015 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:22:04.589090 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:22:04.591406 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 1 00:22:04.607880 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 1 00:22:04.620003 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:22:04.619635 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:22:04.627398 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:22:04.635011 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:22:04.676206 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:22:04.681983 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:22:04.706185 systemd-networkd[1167]: lo: Link UP
Nov 1 00:22:04.706198 systemd-networkd[1167]: lo: Gained carrier
Nov 1 00:22:04.708054 systemd-networkd[1167]: Enumeration completed
Nov 1 00:22:04.708523 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:22:04.708529 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:22:04.708868 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:22:04.710545 systemd[1]: Reached target network.target - Network.
Nov 1 00:22:04.712191 systemd-networkd[1167]: eth0: Link UP
Nov 1 00:22:04.712197 systemd-networkd[1167]: eth0: Gained carrier
Nov 1 00:22:04.712211 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:22:04.720866 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.30.202/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 1 00:22:04.982620 ignition[1116]: Ignition 2.19.0
Nov 1 00:22:04.982634 ignition[1116]: Stage: fetch-offline
Nov 1 00:22:04.982921 ignition[1116]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:04.982934 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 1 00:22:04.983404 ignition[1116]: Ignition finished successfully
Nov 1 00:22:04.985362 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:22:04.993977 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
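The DHCPv4 lease above (172.31.30.202/20 with gateway 172.31.16.1) is internally consistent: the gateway is the first host of the same /20. A quick check with the standard library:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.30.202/20")
    gw = ipaddress.ip_address("172.31.16.1")
    print(iface.network)        # 172.31.16.0/20
    print(gw in iface.network)  # True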
Nov 1 00:22:05.009776 ignition[1177]: Ignition 2.19.0
Nov 1 00:22:05.009790 ignition[1177]: Stage: fetch
Nov 1 00:22:05.010283 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:05.010297 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 1 00:22:05.010420 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 1 00:22:05.020199 ignition[1177]: PUT result: OK
Nov 1 00:22:05.022017 ignition[1177]: parsed url from cmdline: ""
Nov 1 00:22:05.022028 ignition[1177]: no config URL provided
Nov 1 00:22:05.022037 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:22:05.022051 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:22:05.022085 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 1 00:22:05.022766 ignition[1177]: PUT result: OK
Nov 1 00:22:05.022822 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 1 00:22:05.023501 ignition[1177]: GET result: OK
Nov 1 00:22:05.023615 ignition[1177]: parsing config with SHA512: a4cf7d812bfe834928dd1564b1fcab347bf669ba8a58af9378e6aaf97db4ceacd4d16a5f521c1910cd32b5f04612e70a0953d890a0d5b1094a8b4bffe47ce4a1
Nov 1 00:22:05.029621 unknown[1177]: fetched base config from "system"
Nov 1 00:22:05.029636 unknown[1177]: fetched base config from "system"
Nov 1 00:22:05.030298 ignition[1177]: fetch: fetch complete
Nov 1 00:22:05.029644 unknown[1177]: fetched user config from "aws"
Nov 1 00:22:05.030305 ignition[1177]: fetch: fetch passed
Nov 1 00:22:05.030379 ignition[1177]: Ignition finished successfully
Nov 1 00:22:05.032980 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:22:05.036968 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:22:05.056104 ignition[1183]: Ignition 2.19.0
Nov 1 00:22:05.056117 ignition[1183]: Stage: kargs
Nov 1 00:22:05.056580 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:05.056594 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 1 00:22:05.056719 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 1 00:22:05.057612 ignition[1183]: PUT result: OK
Nov 1 00:22:05.060133 ignition[1183]: kargs: kargs passed
Nov 1 00:22:05.060222 ignition[1183]: Ignition finished successfully
Nov 1 00:22:05.062455 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:22:05.067984 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:22:05.082903 ignition[1189]: Ignition 2.19.0
Nov 1 00:22:05.082916 ignition[1189]: Stage: disks
Nov 1 00:22:05.083375 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:22:05.083388 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 1 00:22:05.083509 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 1 00:22:05.084707 ignition[1189]: PUT result: OK
Nov 1 00:22:05.087326 ignition[1189]: disks: disks passed
Nov 1 00:22:05.087402 ignition[1189]: Ignition finished successfully
Nov 1 00:22:05.089532 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:22:05.090149 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:22:05.090497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:22:05.091090 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:22:05.091632 systemd[1]: Reached target sysinit.target - System Initialization.
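The PUT/GET pairs in the fetch stage above are AWS IMDSv2: every metadata read is guarded by a session token obtained with a PUT. A minimal sketch of the same two requests using only the standard library (the 60-second TTL is an arbitrary choice; it only needs to outlive the reads):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: request a session token, as in "PUT .../latest/api/token: attempt #1".
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
    token = urllib.request.urlopen(req).read().decode()

    # Step 2: fetch user-data with the token, as in "GET .../2019-10-01/user-data".
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    user_data = urllib.request.urlopen(req).read()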
Nov 1 00:22:05.092325 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:22:05.096970 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:22:05.134510 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:22:05.137290 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:22:05.144017 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:22:05.244043 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:22:05.244696 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:22:05.245637 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:22:05.263316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:22:05.265799 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:22:05.266554 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:22:05.266597 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:22:05.266624 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:22:05.273841 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:22:05.275997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:22:05.287876 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Nov 1 00:22:05.291877 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:22:05.291950 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:22:05.294277 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 1 00:22:05.306783 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 1 00:22:05.307903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:22:05.647559 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:22:05.663535 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:22:05.668460 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:22:05.672431 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:22:05.948244 systemd-networkd[1167]: eth0: Gained IPv6LL
Nov 1 00:22:05.977347 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:22:05.982896 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:22:05.985952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:22:05.995455 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:22:05.997822 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:06.028809 ignition[1328]: INFO : Ignition 2.19.0 Nov 1 00:22:06.028809 ignition[1328]: INFO : Stage: mount Nov 1 00:22:06.028809 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:06.028809 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:22:06.028809 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:22:06.036443 ignition[1328]: INFO : PUT result: OK Nov 1 00:22:06.036443 ignition[1328]: INFO : mount: mount passed Nov 1 00:22:06.036443 ignition[1328]: INFO : Ignition finished successfully Nov 1 00:22:06.036395 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:22:06.044919 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:22:06.048680 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:22:06.060044 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:22:06.079967 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1340) Nov 1 00:22:06.083964 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:22:06.084023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:22:06.084037 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:22:06.091790 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:22:06.092587 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:22:06.112537 ignition[1356]: INFO : Ignition 2.19.0 Nov 1 00:22:06.112537 ignition[1356]: INFO : Stage: files Nov 1 00:22:06.114037 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:06.114037 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:22:06.114037 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:22:06.115289 ignition[1356]: INFO : PUT result: OK Nov 1 00:22:06.117368 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:22:06.119042 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:22:06.119042 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:22:06.169694 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:22:06.170610 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:22:06.170610 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:22:06.170136 unknown[1356]: wrote ssh authorized keys file for user: core Nov 1 00:22:06.172687 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:22:06.172687 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:22:06.245816 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:22:06.429218 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:06.430339 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:06.436229 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:06.436229 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:06.436229 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:22:06.436229 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:22:06.436229 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:22:06.436229 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:22:06.793039 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:22:07.728718 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:22:07.728718 ignition[1356]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:22:07.731889 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:07.731889 ignition[1356]: INFO : files: files passed Nov 1 00:22:07.731889 ignition[1356]: INFO : Ignition finished successfully Nov 1 00:22:07.733320 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:22:07.744119 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:22:07.747260 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:22:07.751846 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:22:07.752768 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:22:07.771194 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:07.773067 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:07.774252 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:07.774687 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:22:07.776485 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:22:07.782935 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:22:07.814108 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:22:07.814272 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:22:07.815527 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:22:07.816968 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:22:07.817969 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:22:07.822973 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:22:07.837710 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:22:07.842968 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:22:07.856387 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:22:07.857201 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:22:07.858219 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:22:07.859086 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:22:07.859273 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:22:07.860582 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:22:07.861453 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:22:07.862246 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:22:07.863024 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:22:07.863877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:22:07.864691 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:22:07.865467 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:22:07.866262 systemd[1]: Stopped target sysinit.target - System Initialization.
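The numbered ops in the files stage above map directly onto sections of the fetched config. A minimal Ignition (spec 3.x) sketch that would produce ops of this shape; the paths and URLs are taken from the log, while the exact spec version and the unit body are illustrative assumptions:

    {
      "ignition": { "version": "3.4.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          {
            "name": "prepare-helm.service",
            "enabled": true,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz\n\n[Install]\nWantedBy=multi-user.target"
          }
        ]
      }
    }

Note that the config addresses the real root (/opt, /etc); the /sysroot prefix in the log appears because the files stage runs from the initrd before switch-root. The "enabled": true entry is what surfaces above as op(d), "setting preset to enabled".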
Nov 1 00:22:07.867412 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:22:07.868280 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:22:07.869009 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:22:07.869194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:22:07.870273 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:22:07.871066 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:22:07.871898 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:22:07.872046 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:22:07.872650 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:22:07.872853 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:22:07.874216 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:22:07.874399 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:22:07.875118 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:22:07.875276 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:22:07.883392 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:22:07.886149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:22:07.886827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:22:07.887923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:22:07.891182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:22:07.891395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:22:07.898808 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:22:07.898939 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:22:07.910579 ignition[1409]: INFO : Ignition 2.19.0 Nov 1 00:22:07.910579 ignition[1409]: INFO : Stage: umount Nov 1 00:22:07.912527 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:07.912527 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:22:07.912527 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:22:07.912527 ignition[1409]: INFO : PUT result: OK Nov 1 00:22:07.916770 ignition[1409]: INFO : umount: umount passed Nov 1 00:22:07.916770 ignition[1409]: INFO : Ignition finished successfully Nov 1 00:22:07.917553 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:22:07.917710 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:22:07.920271 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:22:07.920396 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:22:07.920992 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:22:07.921051 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:22:07.922024 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:22:07.922070 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 1 00:22:07.922857 systemd[1]: Stopped target network.target - Network. Nov 1 00:22:07.923686 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Nov 1 00:22:07.923745 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:22:07.924831 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:22:07.925828 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:22:07.927809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:22:07.928489 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:22:07.928944 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:22:07.929393 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:22:07.929440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:22:07.929727 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:22:07.930821 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:22:07.931349 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:22:07.931414 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:22:07.932530 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:22:07.932588 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:22:07.935483 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:22:07.936532 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:22:07.939195 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:22:07.939876 systemd-networkd[1167]: eth0: DHCPv6 lease lost Nov 1 00:22:07.940120 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:22:07.940230 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:22:07.941400 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:22:07.941505 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:22:07.942672 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:22:07.942829 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:22:07.944392 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:22:07.944470 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:22:07.950947 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:22:07.952336 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:22:07.952409 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:22:07.953910 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:22:07.956125 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:22:07.956584 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:22:07.965347 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:22:07.965446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:22:07.966400 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:22:07.966463 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:22:07.967091 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:22:07.967150 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:22:07.974178 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 1 00:22:07.975102 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:22:07.977281 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:22:07.977365 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:22:07.978573 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:22:07.978665 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:22:07.979423 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:22:07.979471 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:22:07.980388 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:22:07.980452 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:22:07.981494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:22:07.981554 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:22:07.982584 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:22:07.982642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:22:07.989967 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:22:07.990588 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:22:07.990671 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:22:07.991364 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:22:07.991426 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:22:07.992226 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:22:07.992281 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:22:07.993067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:22:07.993124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:08.000300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:22:08.000435 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:22:08.001701 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:22:08.010987 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:22:08.019040 systemd[1]: Switching root. Nov 1 00:22:08.061604 systemd-journald[178]: Journal stopped Nov 1 00:22:09.888158 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Nov 1 00:22:09.888261 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:22:09.888291 kernel: SELinux: policy capability open_perms=1 Nov 1 00:22:09.888312 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:22:09.888331 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:22:09.888355 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:22:09.888375 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:22:09.888394 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:22:09.888409 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:22:09.888426 kernel: audit: type=1403 audit(1761956528.592:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:22:09.888452 systemd[1]: Successfully loaded SELinux policy in 84.931ms. 
Nov 1 00:22:09.888489 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.775ms. Nov 1 00:22:09.888511 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:22:09.888532 systemd[1]: Detected virtualization amazon. Nov 1 00:22:09.888552 systemd[1]: Detected architecture x86-64. Nov 1 00:22:09.888576 systemd[1]: Detected first boot. Nov 1 00:22:09.888596 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:22:09.888615 zram_generator::config[1452]: No configuration found. Nov 1 00:22:09.888636 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:22:09.888659 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:22:09.888678 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:22:09.888698 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:22:09.888719 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:22:09.888744 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:22:09.888778 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:22:09.888797 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:22:09.888818 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:22:09.888842 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:22:09.888861 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:22:09.888882 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:22:09.888905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:22:09.888932 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:22:09.888951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:22:09.888971 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:22:09.888990 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:22:09.889009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:22:09.889032 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:22:09.889052 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:22:09.889072 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:22:09.889092 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:22:09.889111 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:22:09.889131 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:22:09.889150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:22:09.889169 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:22:09.889192 systemd[1]: Reached target slices.target - Slice Units. 
Nov 1 00:22:09.889211 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:22:09.889230 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:22:09.889249 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:22:09.889269 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:22:09.889288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:22:09.889307 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:22:09.889328 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:22:09.889348 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:22:09.889370 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:22:09.889389 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:22:09.889409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:09.889429 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:22:09.889448 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:22:09.889468 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:22:09.889492 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:22:09.889512 systemd[1]: Reached target machines.target - Containers. Nov 1 00:22:09.889532 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:22:09.889555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:22:09.889574 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:22:09.889593 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:22:09.889613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:22:09.889632 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:22:09.889652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:22:09.889671 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:22:09.889691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:22:09.889714 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:22:09.889735 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:22:09.892318 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:22:09.892363 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:22:09.892388 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:22:09.892415 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:22:09.892439 kernel: fuse: init (API version 7.39) Nov 1 00:22:09.892460 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:22:09.892479 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
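The burst of modprobe@*.service starts above all instantiate a single template unit, with the module name passed as the instance. An approximate reconstruction of the upstream template, from memory rather than the host (check /usr/lib/systemd/system/modprobe@.service for the authoritative text):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %i

The leading "-" on ExecStart tells systemd to ignore modprobe's exit status, which is why a module that is built into the kernel still logs "Finished" rather than a failure.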
Nov 1 00:22:09.892507 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:22:09.892526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:22:09.892547 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:22:09.892566 systemd[1]: Stopped verity-setup.service. Nov 1 00:22:09.892586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:09.892604 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:22:09.892623 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:22:09.892650 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:22:09.892670 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:22:09.892697 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:22:09.892719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:22:09.892741 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:22:09.894228 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:22:09.894270 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:22:09.894292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:09.894314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:22:09.894336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:09.894362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:22:09.894384 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:22:09.894409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:22:09.894430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:22:09.894452 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:22:09.894474 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:22:09.894496 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:22:09.894557 systemd-journald[1537]: Collecting audit messages is disabled. Nov 1 00:22:09.894601 kernel: loop: module loaded Nov 1 00:22:09.894627 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:22:09.894649 kernel: ACPI: bus type drm_connector registered Nov 1 00:22:09.894673 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:22:09.894697 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:22:09.894719 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:22:09.894741 systemd-journald[1537]: Journal started Nov 1 00:22:09.894802 systemd-journald[1537]: Runtime Journal (/run/log/journal/ec2963bc2a33d6ddab01777e23c188ba) is 4.7M, max 38.2M, 33.4M free. Nov 1 00:22:09.447342 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:22:09.484606 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 1 00:22:09.485070 systemd[1]: systemd-journald.service: Deactivated successfully. 
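The runtime journal cap above (4.7M used, 38.2M max) is derived by journald from the size of the backing filesystem under /run. The limits can be pinned explicitly instead; an /etc/systemd/journald.conf fragment with illustrative values:

    [Journal]
    RuntimeMaxUse=38M
    SystemMaxUse=195M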
Nov 1 00:22:09.911775 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:22:09.921782 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:22:09.927782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:22:09.939781 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:22:09.945923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:09.955821 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:22:09.976866 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:22:09.990794 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:22:09.996794 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:22:10.000415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:22:10.001595 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:22:10.001868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:22:10.003234 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:10.003420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:22:10.016850 kernel: loop0: detected capacity change from 0 to 61336 Nov 1 00:22:10.009219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:22:10.010273 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:22:10.011206 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:22:10.012108 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:22:10.013376 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:22:10.018256 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:22:10.049523 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:22:10.061094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:22:10.070016 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:22:10.071543 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:22:10.073598 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Nov 1 00:22:10.073623 systemd-tmpfiles[1563]: ACLs are not supported, ignoring. Nov 1 00:22:10.074575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:22:10.082061 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:22:10.083384 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:22:10.093990 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:22:10.120240 systemd-journald[1537]: Time spent on flushing to /var/log/journal/ec2963bc2a33d6ddab01777e23c188ba is 98.096ms for 994 entries. 
Nov 1 00:22:10.120240 systemd-journald[1537]: System Journal (/var/log/journal/ec2963bc2a33d6ddab01777e23c188ba) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:22:10.251352 systemd-journald[1537]: Received client request to flush runtime journal. Nov 1 00:22:10.251437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:22:10.251550 kernel: loop1: detected capacity change from 0 to 140768 Nov 1 00:22:10.175361 udevadm[1590]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:22:10.199238 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:22:10.214385 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:22:10.229030 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:22:10.263520 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:22:10.266870 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:22:10.269116 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:22:10.295977 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Nov 1 00:22:10.296409 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Nov 1 00:22:10.303148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:22:10.343344 kernel: loop2: detected capacity change from 0 to 142488 Nov 1 00:22:10.474146 kernel: loop3: detected capacity change from 0 to 219144 Nov 1 00:22:10.550795 kernel: loop4: detected capacity change from 0 to 61336 Nov 1 00:22:10.562795 kernel: loop5: detected capacity change from 0 to 140768 Nov 1 00:22:10.584784 kernel: loop6: detected capacity change from 0 to 142488 Nov 1 00:22:10.611793 kernel: loop7: detected capacity change from 0 to 219144 Nov 1 00:22:10.635853 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 1 00:22:10.637008 (sd-merge)[1609]: Merged extensions into '/usr'. Nov 1 00:22:10.642744 systemd[1]: Reloading requested from client PID 1562 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:22:10.642775 systemd[1]: Reloading... Nov 1 00:22:10.781783 zram_generator::config[1635]: No configuration found. Nov 1 00:22:11.014976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:11.100908 systemd[1]: Reloading finished in 457 ms. Nov 1 00:22:11.135442 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:22:11.136432 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:22:11.145956 systemd[1]: Starting ensure-sysext.service... Nov 1 00:22:11.149388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:22:11.159066 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:22:11.170901 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:22:11.170919 systemd[1]: Reloading... Nov 1 00:22:11.191390 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
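The (sd-merge) lines above are systemd-sysext folding the four extension images into an overlay on /usr; the loop-device capacity changes are those images being attached (the same four sizes repeat for loop4-loop7 as the set is scanned again for the merge). To be eligible at all, an image has to carry a release file whose identity fields match the host; a sketch with Flatcar-style values, illustrative rather than copied from a real image:

    # /usr/lib/extension-release.d/extension-release.kubernetes (inside the image)
    ID=flatcar
    SYSEXT_LEVEL=1.0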
Nov 1 00:22:11.194363 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:22:11.197062 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:22:11.197663 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Nov 1 00:22:11.197867 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Nov 1 00:22:11.219799 systemd-udevd[1689]: Using default interface naming scheme 'v255'. Nov 1 00:22:11.222945 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:22:11.222961 systemd-tmpfiles[1688]: Skipping /boot Nov 1 00:22:11.236809 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:22:11.236824 systemd-tmpfiles[1688]: Skipping /boot Nov 1 00:22:11.315775 zram_generator::config[1717]: No configuration found. Nov 1 00:22:11.405914 (udev-worker)[1731]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:22:11.557877 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 1 00:22:11.567299 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:22:11.600638 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:22:11.600742 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Nov 1 00:22:11.602780 kernel: ACPI: button: Sleep Button [SLPF] Nov 1 00:22:11.603403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:11.634887 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:22:11.696775 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:22:11.746926 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1745) Nov 1 00:22:11.790995 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:22:11.791848 systemd[1]: Reloading finished in 620 ms. Nov 1 00:22:11.796768 ldconfig[1559]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:22:11.822212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:22:11.824666 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:22:11.826940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:22:11.921222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 1 00:22:11.923223 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:22:11.935627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:11.944410 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:11.950123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:22:11.951017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:22:11.955873 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
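The "Duplicate line for path" notices above are benign: a later-read tmpfiles.d fragment re-declares a path an earlier one already configured, and the duplicate is ignored. The format in question, with illustrative entries of the kind provision.conf and systemd.conf both carry:

    # type  path               mode  uid   gid   age  argument
    d       /root              0700  root  root  -
    d       /var/lib/systemd   0755  root  root  -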
Nov 1 00:22:11.960160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:22:11.964113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:22:11.974243 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:22:11.975078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:22:11.978100 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:22:11.985056 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:22:11.995119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:22:12.007110 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:22:12.016467 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:22:12.025509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:22:12.026634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:12.031839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:12.038993 lvm[1884]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:22:12.032080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:22:12.034508 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:12.034690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:22:12.036566 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:12.037345 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:22:12.058018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:12.058851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:22:12.067193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:22:12.075102 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:22:12.078547 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:22:12.087015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:22:12.087963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:22:12.088258 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:22:12.096086 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:22:12.096745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:22:12.099478 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:22:12.105437 systemd[1]: Finished ensure-sysext.service. Nov 1 00:22:12.114247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:22:12.125042 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Nov 1 00:22:12.126104 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:22:12.139310 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:22:12.150479 lvm[1917]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:22:12.151015 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:22:12.163857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:12.164579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:22:12.165799 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:22:12.166812 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:12.167004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:22:12.172725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:12.184801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:12.185358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:22:12.186641 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:22:12.187952 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:22:12.191241 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:22:12.204392 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:22:12.205749 augenrules[1928]: No rules Nov 1 00:22:12.206717 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:12.219893 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:22:12.220965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:22:12.223179 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:22:12.247345 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:22:12.284569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:22:12.313638 systemd-networkd[1895]: lo: Link UP Nov 1 00:22:12.313647 systemd-networkd[1895]: lo: Gained carrier Nov 1 00:22:12.315301 systemd-networkd[1895]: Enumeration completed Nov 1 00:22:12.315427 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:22:12.316513 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:22:12.316521 systemd-networkd[1895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:22:12.322107 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:22:12.325172 systemd-networkd[1895]: eth0: Link UP Nov 1 00:22:12.325347 systemd-networkd[1895]: eth0: Gained carrier Nov 1 00:22:12.325369 systemd-networkd[1895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
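eth0 above is picked up by Flatcar's catch-all zz-default.network, hence the note about potentially unpredictable interface names (the earlier "Network interface NamePolicy= disabled on kernel command line" message means kernel naming is in effect). A narrowed, illustrative equivalent of what that match does for this one interface; the stock unit matches more broadly and sets more options:

    [Match]
    Name=eth0

    [Network]
    DHCP=yes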
Nov 1 00:22:12.335501 systemd-resolved[1896]: Positive Trust Anchors: Nov 1 00:22:12.335814 systemd-resolved[1896]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:22:12.335876 systemd-resolved[1896]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:22:12.336847 systemd-networkd[1895]: eth0: DHCPv4 address 172.31.30.202/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 1 00:22:12.355951 systemd-resolved[1896]: Defaulting to hostname 'linux'. Nov 1 00:22:12.358315 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:22:12.358934 systemd[1]: Reached target network.target - Network. Nov 1 00:22:12.359391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:22:12.359964 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:22:12.360471 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:22:12.360953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:22:12.361532 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:22:12.362038 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:22:12.362395 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:22:12.362793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:22:12.362839 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:22:12.363213 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:22:12.364262 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:22:12.366214 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:22:12.372436 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:22:12.373588 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:22:12.374150 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:22:12.374570 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:22:12.375024 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:22:12.375066 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:22:12.376568 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:22:12.380962 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:22:12.386994 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:22:12.389995 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:22:12.392926 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 1 00:22:12.393583 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:22:12.396005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:22:12.401078 systemd[1]: Started ntpd.service - Network Time Service. Nov 1 00:22:12.406909 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:22:12.410118 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 1 00:22:12.420652 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:22:12.423551 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:22:12.438237 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:22:12.439968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:22:12.440673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:22:12.450976 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:22:12.461665 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:22:12.502676 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:22:12.503831 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:22:12.538280 jq[1954]: false Nov 1 00:22:12.548116 extend-filesystems[1955]: Found loop4 Nov 1 00:22:12.548116 extend-filesystems[1955]: Found loop5 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found loop6 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found loop7 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p1 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p2 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p3 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found usr Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p4 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p6 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p7 Nov 1 00:22:12.554884 extend-filesystems[1955]: Found nvme0n1p9 Nov 1 00:22:12.554884 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9 Nov 1 00:22:12.555566 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:22:12.560141 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:22:12.574959 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:22:12.575230 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 1 00:22:12.592127 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Oct 31 22:05:56 UTC 2025 (1): Starting Nov 1 00:22:12.595666 jq[1966]: true Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Oct 31 22:05:56 UTC 2025 (1): Starting Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: ---------------------------------------------------- Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: corporation. Support and training for ntp-4 are Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: available at https://www.nwtime.org/support Nov 1 00:22:12.595954 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: ---------------------------------------------------- Nov 1 00:22:12.595240 (ntainerd)[1985]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:22:12.592163 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 1 00:22:12.592175 ntpd[1957]: ---------------------------------------------------- Nov 1 00:22:12.602312 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: proto: precision = 0.072 usec (-24) Nov 1 00:22:12.602367 tar[1976]: linux-amd64/LICENSE Nov 1 00:22:12.592184 ntpd[1957]: ntp-4 is maintained by Network Time Foundation, Nov 1 00:22:12.592194 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 1 00:22:12.592203 ntpd[1957]: corporation. Support and training for ntp-4 are Nov 1 00:22:12.592212 ntpd[1957]: available at https://www.nwtime.org/support Nov 1 00:22:12.592222 ntpd[1957]: ---------------------------------------------------- Nov 1 00:22:12.599387 ntpd[1957]: proto: precision = 0.072 usec (-24) Nov 1 00:22:12.603110 ntpd[1957]: basedate set to 2025-10-19 Nov 1 00:22:12.608454 tar[1976]: linux-amd64/helm Nov 1 00:22:12.608505 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: basedate set to 2025-10-19 Nov 1 00:22:12.608505 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: gps base set to 2025-10-19 (week 2389) Nov 1 00:22:12.603135 ntpd[1957]: gps base set to 2025-10-19 (week 2389) Nov 1 00:22:12.610466 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Nov 1 00:22:12.613466 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123 Nov 1 00:22:12.613466 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 1 00:22:12.613466 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Nov 1 00:22:12.613466 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Listen normally on 3 eth0 172.31.30.202:123 Nov 1 00:22:12.613466 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Listen normally on 4 lo [::1]:123 Nov 1 00:22:12.612977 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 1 00:22:12.613274 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123 Nov 1 00:22:12.613317 ntpd[1957]: Listen normally on 3 eth0 172.31.30.202:123 Nov 1 00:22:12.613360 ntpd[1957]: Listen normally on 4 lo [::1]:123 Nov 1 00:22:12.614372 ntpd[1957]: bind(21) AF_INET6 fe80::45c:2dff:fe5d:87a7%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:22:12.617664 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 1 00:22:12.618160 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: bind(21) AF_INET6 fe80::45c:2dff:fe5d:87a7%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:22:12.618160 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: unable to create socket on eth0 (5) for fe80::45c:2dff:fe5d:87a7%2#123 Nov 1 00:22:12.618160 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: failed to init interface for address fe80::45c:2dff:fe5d:87a7%2 Nov 1 00:22:12.618160 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Nov 1 00:22:12.614407 ntpd[1957]: unable to create socket on eth0 (5) for fe80::45c:2dff:fe5d:87a7%2#123 Nov 1 00:22:12.614422 ntpd[1957]: failed to init interface for address fe80::45c:2dff:fe5d:87a7%2 Nov 1 00:22:12.614464 ntpd[1957]: Listening on routing socket on fd #21 for interface updates Nov 1 00:22:12.617438 dbus-daemon[1953]: [system] SELinux support is enabled Nov 1 00:22:12.626106 update_engine[1964]: I20251101 00:22:12.625205 1964 main.cc:92] Flatcar Update Engine starting Nov 1 00:22:12.630780 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9 Nov 1 00:22:12.631310 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:22:12.631310 ntpd[1957]: 1 Nov 00:22:12 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:22:12.629942 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:22:12.629976 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 1 00:22:12.633615 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:22:12.633671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:22:12.636679 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:22:12.636712 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:22:12.638988 dbus-daemon[1953]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1895 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:22:12.646458 extend-filesystems[2002]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:22:12.647548 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:22:12.661771 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 1 00:22:12.662787 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 1 00:22:12.672355 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 1 00:22:12.677349 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:22:12.678051 update_engine[1964]: I20251101 00:22:12.677990 1964 update_check_scheduler.cc:74] Next update check in 8m41s Nov 1 00:22:12.681984 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
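The ntpd bind failures above are a startup race: the link-local address fe80::45c:2dff:fe5d:87a7 exists on eth0 but is not yet usable, so bind() returns "Cannot assign requested address"; ntpd watches the routing socket and retries, succeeding once the address is up (see the "Listen normally on 7 eth0" line much later in this log). A minimal Python sketch of the same failure mode, assuming the interface name eth0 and probing an unprivileged port instead of 123:

```python
# Sketch: binding to a link-local IPv6 address fails with EADDRNOTAVAIL
# until the kernel has actually assigned it -- the window ntpd hits above.
# Address is taken from the log; the interface name is an assumption.
import errno
import socket

ADDR = "fe80::45c:2dff:fe5d:87a7"   # link-local address from the log
IFACE = "eth0"                      # the "%2" in the log is this interface's index

def try_bind(addr: str, iface: str, port: int = 12345) -> bool:
    scope = socket.if_nametoindex(iface)
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        s.bind((addr, port, 0, scope))   # (addr, port, flowinfo, scope_id)
        return True
    except OSError as e:
        if e.errno == errno.EADDRNOTAVAIL:   # "Cannot assign requested address"
            return False
        raise
    finally:
        s.close()

print("bindable:", try_bind(ADDR, IFACE))
```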
Nov 1 00:22:12.698982 jq[1993]: true Nov 1 00:22:12.716688 systemd-logind[1963]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:22:12.716716 systemd-logind[1963]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 00:22:12.716740 systemd-logind[1963]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:22:12.717157 systemd-logind[1963]: New seat seat0. Nov 1 00:22:12.721479 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.777 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.779 INFO Fetch successful Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.780 INFO Fetch successful Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.781 INFO Fetch successful Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.792 INFO Fetch successful Nov 1 00:22:12.793784 coreos-metadata[1952]: Nov 01 00:22:12.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 1 00:22:12.798655 coreos-metadata[1952]: Nov 01 00:22:12.797 INFO Fetch failed with 404: resource not found Nov 1 00:22:12.798655 coreos-metadata[1952]: Nov 01 00:22:12.798 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 1 00:22:12.806658 coreos-metadata[1952]: Nov 01 00:22:12.801 INFO Fetch successful Nov 1 00:22:12.806658 coreos-metadata[1952]: Nov 01 00:22:12.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 1 00:22:12.806658 coreos-metadata[1952]: Nov 01 00:22:12.806 INFO Fetch successful Nov 1 00:22:12.806658 coreos-metadata[1952]: Nov 01 00:22:12.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 1 00:22:12.809980 coreos-metadata[1952]: Nov 01 00:22:12.807 INFO Fetch successful Nov 1 00:22:12.809980 coreos-metadata[1952]: Nov 01 00:22:12.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 1 00:22:12.811895 coreos-metadata[1952]: Nov 01 00:22:12.811 INFO Fetch successful Nov 1 00:22:12.811895 coreos-metadata[1952]: Nov 01 00:22:12.811 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 1 00:22:12.819111 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 1 00:22:12.821269 coreos-metadata[1952]: Nov 01 00:22:12.819 INFO Fetch successful Nov 1 00:22:12.850353 extend-filesystems[2002]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 1 00:22:12.850353 extend-filesystems[2002]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 1 00:22:12.850353 extend-filesystems[2002]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. 
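The coreos-metadata fetches above follow the IMDSv2 flow: a PUT to /latest/api/token to obtain a session token, then GETs under the 2021-01-03 metadata tree with the token attached (the ipv6 path legitimately 404s on an instance with no IPv6 address). A rough sketch of the same sequence; paths come from the log, the token TTL value is an assumption:

```python
# Minimal IMDSv2 client, mirroring the requests coreos-metadata logs above.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # PUT /latest/api/token with a TTL header returns a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # GET a metadata path under the same API date the log shows.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
for path in ("instance-id", "instance-type", "local-ipv4"):
    print(path, "=", imds_get(path, token))   # "ipv6" would 404 here, as in the log
```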
Nov 1 00:22:12.859861 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1745) Nov 1 00:22:12.850765 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:22:12.859996 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9 Nov 1 00:22:12.851037 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:22:12.920827 bash[2032]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:22:12.926162 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:22:12.944146 systemd[1]: Starting sshkeys.service... Nov 1 00:22:12.946442 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:22:12.952770 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:22:13.061887 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:22:13.071452 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:22:13.074239 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 00:22:13.078249 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 1 00:22:13.088188 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2003 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:22:13.103205 systemd[1]: Starting polkit.service - Authorization Manager... Nov 1 00:22:13.219102 polkitd[2104]: Started polkitd version 121 Nov 1 00:22:13.263432 polkitd[2104]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:22:13.263514 polkitd[2104]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:22:13.273255 polkitd[2104]: Finished loading, compiling and executing 2 rules Nov 1 00:22:13.275114 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:22:13.275820 systemd[1]: Started polkit.service - Authorization Manager. Nov 1 00:22:13.276543 polkitd[2104]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:22:13.277460 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:22:13.336373 systemd-resolved[1896]: System hostname changed to 'ip-172-31-30-202'. Nov 1 00:22:13.336678 systemd-hostnamed[2003]: Hostname set to (transient) Nov 1 00:22:13.370201 coreos-metadata[2082]: Nov 01 00:22:13.370 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 1 00:22:13.371317 coreos-metadata[2082]: Nov 01 00:22:13.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 1 00:22:13.375162 coreos-metadata[2082]: Nov 01 00:22:13.375 INFO Fetch successful Nov 1 00:22:13.375334 coreos-metadata[2082]: Nov 01 00:22:13.375 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 1 00:22:13.377846 coreos-metadata[2082]: Nov 01 00:22:13.377 INFO Fetch successful Nov 1 00:22:13.381310 unknown[2082]: wrote ssh authorized keys file for user: core Nov 1 00:22:13.434023 update-ssh-keys[2150]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:22:13.435535 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:22:13.446962 systemd[1]: Finished sshkeys.service. 
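The resize recorded above is an online grow: resize2fs 1.47.1 expanded the mounted root filesystem from 553472 to 3587067 4k blocks (roughly 2.1 GiB to 13.7 GiB) without unmounting. A minimal sketch of the equivalent manual step, assuming the underlying partition has already been enlarged and the command runs as root:

```python
# Sketch of the online grow extend-filesystems performed above: for a
# mounted ext4 filesystem, resize2fs with no size argument grows it to
# fill the device. Device path is taken from the log.
import subprocess

DEV = "/dev/nvme0n1p9"
subprocess.run(["resize2fs", DEV], check=True)
```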
Nov 1 00:22:13.495628 containerd[1985]: time="2025-11-01T00:22:13.495473503Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:22:13.592645 ntpd[1957]: bind(24) AF_INET6 fe80::45c:2dff:fe5d:87a7%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:22:13.592696 ntpd[1957]: unable to create socket on eth0 (6) for fe80::45c:2dff:fe5d:87a7%2#123 Nov 1 00:22:13.593082 ntpd[1957]: 1 Nov 00:22:13 ntpd[1957]: bind(24) AF_INET6 fe80::45c:2dff:fe5d:87a7%2#123 flags 0x11 failed: Cannot assign requested address Nov 1 00:22:13.593082 ntpd[1957]: 1 Nov 00:22:13 ntpd[1957]: unable to create socket on eth0 (6) for fe80::45c:2dff:fe5d:87a7%2#123 Nov 1 00:22:13.593082 ntpd[1957]: 1 Nov 00:22:13 ntpd[1957]: failed to init interface for address fe80::45c:2dff:fe5d:87a7%2 Nov 1 00:22:13.592711 ntpd[1957]: failed to init interface for address fe80::45c:2dff:fe5d:87a7%2 Nov 1 00:22:13.607054 containerd[1985]: time="2025-11-01T00:22:13.606787927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.611192 containerd[1985]: time="2025-11-01T00:22:13.611134565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:13.611343 containerd[1985]: time="2025-11-01T00:22:13.611324629Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:22:13.611434 containerd[1985]: time="2025-11-01T00:22:13.611419599Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:22:13.611693 containerd[1985]: time="2025-11-01T00:22:13.611671516Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.613799894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.613939085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.613962899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.614206919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.614229074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.614249464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.614265000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.614358533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.614800 containerd[1985]: time="2025-11-01T00:22:13.614617557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:22:13.615581 containerd[1985]: time="2025-11-01T00:22:13.615230765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:22:13.615581 containerd[1985]: time="2025-11-01T00:22:13.615260850Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:22:13.615581 containerd[1985]: time="2025-11-01T00:22:13.615371702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:22:13.615581 containerd[1985]: time="2025-11-01T00:22:13.615430991Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.621885855Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.621961457Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.621997492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622024152Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622046468Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622227671Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622563326Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622697994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622718548Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622739030Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622781101Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622801675Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622822897Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.623773 containerd[1985]: time="2025-11-01T00:22:13.622845088Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622867642Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622890008Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622908993Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622928899Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622957324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622977305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.622995723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623015674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623047970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623068351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623085754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623105841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623124581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624345 containerd[1985]: time="2025-11-01T00:22:13.623146311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623163209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623181844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623202037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623227927Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623260778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623285484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623303331Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623374183Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623400589Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623418996Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623438325Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623453319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623470290Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:22:13.624871 containerd[1985]: time="2025-11-01T00:22:13.623483690Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:22:13.625434 containerd[1985]: time="2025-11-01T00:22:13.623497722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:22:13.630796 containerd[1985]: time="2025-11-01T00:22:13.628021143Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:22:13.630796 containerd[1985]: time="2025-11-01T00:22:13.628129471Z" level=info msg="Connect containerd service" Nov 1 00:22:13.630796 containerd[1985]: time="2025-11-01T00:22:13.628190223Z" level=info msg="using legacy CRI server" Nov 1 00:22:13.630796 containerd[1985]: time="2025-11-01T00:22:13.628201524Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:22:13.630796 containerd[1985]: time="2025-11-01T00:22:13.628335996Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:22:13.632065 containerd[1985]: time="2025-11-01T00:22:13.632022347Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:22:13.634219 
containerd[1985]: time="2025-11-01T00:22:13.634171739Z" level=info msg="Start subscribing containerd event" Nov 1 00:22:13.634290 containerd[1985]: time="2025-11-01T00:22:13.634241426Z" level=info msg="Start recovering state" Nov 1 00:22:13.634347 containerd[1985]: time="2025-11-01T00:22:13.634337022Z" level=info msg="Start event monitor" Nov 1 00:22:13.634387 containerd[1985]: time="2025-11-01T00:22:13.634358496Z" level=info msg="Start snapshots syncer" Nov 1 00:22:13.634387 containerd[1985]: time="2025-11-01T00:22:13.634372656Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:22:13.634452 containerd[1985]: time="2025-11-01T00:22:13.634383920Z" level=info msg="Start streaming server" Nov 1 00:22:13.634746 containerd[1985]: time="2025-11-01T00:22:13.634722076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:22:13.634828 containerd[1985]: time="2025-11-01T00:22:13.634810777Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:22:13.634976 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:22:13.640707 containerd[1985]: time="2025-11-01T00:22:13.640666873Z" level=info msg="containerd successfully booted in 0.148242s" Nov 1 00:22:13.948775 tar[1976]: linux-amd64/README.md Nov 1 00:22:13.965372 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:22:14.019332 sshd_keygen[1998]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:22:14.043029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:22:14.050115 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:22:14.057356 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:22:14.057623 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:22:14.060030 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:22:14.075930 systemd-networkd[1895]: eth0: Gained IPv6LL Nov 1 00:22:14.078860 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:22:14.080588 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:22:14.085087 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 1 00:22:14.088524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:14.092879 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:22:14.093952 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:22:14.107735 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:22:14.110372 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:22:14.111333 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:22:14.158235 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:22:14.167108 amazon-ssm-agent[2174]: Initializing new seelog logger Nov 1 00:22:14.167461 amazon-ssm-agent[2174]: New Seelog Logger Creation Complete Nov 1 00:22:14.167461 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.167461 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.167816 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 processing appconfig overrides Nov 1 00:22:14.168807 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
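The earlier containerd error about /etc/cni/net.d is expected at this stage: the CRI plugin loads but defers pod networking until some agent installs a CNI config there. For illustration only, a minimal bridge conflist written from Python; the network name, bridge name, and subnet are placeholders, not values from this host:

```python
# Illustrative sketch: dropping a minimal CNI conflist into the directory
# containerd complained about. All names and the subnet are hypothetical.
import json
import pathlib

conf = {
    "cniVersion": "1.0.0",
    "name": "examplenet",
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
    }],
}
# .conflist is the extension for the plugin-chain format used here.
path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
path.write_text(json.dumps(conf, indent=2))
```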
Nov 1 00:22:14.168807 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.168807 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 processing appconfig overrides Nov 1 00:22:14.168807 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.168807 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.168807 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 processing appconfig overrides Nov 1 00:22:14.169032 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO Proxy environment variables: Nov 1 00:22:14.172021 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.172021 amazon-ssm-agent[2174]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 1 00:22:14.172149 amazon-ssm-agent[2174]: 2025/11/01 00:22:14 processing appconfig overrides Nov 1 00:22:14.270480 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO https_proxy: Nov 1 00:22:14.368199 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO http_proxy: Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO no_proxy: Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO Checking if agent identity type OnPrem can be assumed Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO Checking if agent identity type EC2 can be assumed Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO Agent will take identity from EC2 Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] Starting Core Agent Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [Registrar] Starting registrar module Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [EC2Identity] EC2 registration was successful. Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [CredentialRefresher] credentialRefresher has started Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [CredentialRefresher] Starting credentials refresher loop Nov 1 00:22:14.417286 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 1 00:22:14.466558 amazon-ssm-agent[2174]: 2025-11-01 00:22:14 INFO [CredentialRefresher] Next credential rotation will be in 30.31665953035 minutes Nov 1 00:22:15.135226 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 1 00:22:15.142364 systemd[1]: Started sshd@0-172.31.30.202:22-139.178.89.65:59588.service - OpenSSH per-connection server daemon (139.178.89.65:59588). Nov 1 00:22:15.382908 sshd[2196]: Accepted publickey for core from 139.178.89.65 port 59588 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:15.385091 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:15.393540 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:22:15.402085 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:22:15.405602 systemd-logind[1963]: New session 1 of user core. Nov 1 00:22:15.421229 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:22:15.429127 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:22:15.436492 (systemd)[2201]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:15.437098 amazon-ssm-agent[2174]: 2025-11-01 00:22:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 1 00:22:15.539194 amazon-ssm-agent[2174]: 2025-11-01 00:22:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2202) started Nov 1 00:22:15.626967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:15.629621 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:22:15.636864 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:15.638209 systemd[2201]: Queued start job for default target default.target. Nov 1 00:22:15.640099 amazon-ssm-agent[2174]: 2025-11-01 00:22:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 1 00:22:15.640276 systemd[2201]: Created slice app.slice - User Application Slice. Nov 1 00:22:15.640313 systemd[2201]: Reached target paths.target - Paths. Nov 1 00:22:15.640333 systemd[2201]: Reached target timers.target - Timers. Nov 1 00:22:15.648114 systemd[2201]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:22:15.660768 systemd[2201]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:22:15.660942 systemd[2201]: Reached target sockets.target - Sockets. Nov 1 00:22:15.660964 systemd[2201]: Reached target basic.target - Basic System. Nov 1 00:22:15.661020 systemd[2201]: Reached target default.target - Main User Target. Nov 1 00:22:15.661060 systemd[2201]: Startup finished in 215ms. Nov 1 00:22:15.662954 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:22:15.672991 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:22:15.673799 systemd[1]: Startup finished in 592ms (kernel) + 7.850s (initrd) + 7.164s (userspace) = 15.608s. Nov 1 00:22:15.826894 systemd[1]: Started sshd@1-172.31.30.202:22-139.178.89.65:59592.service - OpenSSH per-connection server daemon (139.178.89.65:59592). Nov 1 00:22:15.981562 sshd[2237]: Accepted publickey for core from 139.178.89.65 port 59592 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:15.983277 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:15.989465 systemd-logind[1963]: New session 2 of user core. 
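The "SHA256:..." strings sshd logs for accepted keys are unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch that recomputes such a fingerprint from an authorized_keys line (the key in the usage comment is hypothetical):

```python
# Recompute an OpenSSH-style SHA256 key fingerprint like the ones sshd
# logs above: base64(sha256(key blob)) with the "=" padding stripped.
import base64
import hashlib

def fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# usage (hypothetical key line):
# print(fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... core@host"))
```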
Nov 1 00:22:15.998009 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:22:16.116176 sshd[2237]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:16.120414 systemd[1]: sshd@1-172.31.30.202:22-139.178.89.65:59592.service: Deactivated successfully. Nov 1 00:22:16.122304 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:22:16.123073 systemd-logind[1963]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:22:16.124507 systemd-logind[1963]: Removed session 2. Nov 1 00:22:16.153214 systemd[1]: Started sshd@2-172.31.30.202:22-139.178.89.65:44160.service - OpenSSH per-connection server daemon (139.178.89.65:44160). Nov 1 00:22:16.305166 sshd[2245]: Accepted publickey for core from 139.178.89.65 port 44160 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:16.307295 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:16.312488 systemd-logind[1963]: New session 3 of user core. Nov 1 00:22:16.316984 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:22:16.376948 kubelet[2218]: E1101 00:22:16.376891 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:16.379896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:16.380100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:16.435582 sshd[2245]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:16.438965 systemd[1]: sshd@2-172.31.30.202:22-139.178.89.65:44160.service: Deactivated successfully. Nov 1 00:22:16.441136 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:22:16.442467 systemd-logind[1963]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:22:16.444063 systemd-logind[1963]: Removed session 3. Nov 1 00:22:16.467053 systemd[1]: Started sshd@3-172.31.30.202:22-139.178.89.65:44164.service - OpenSSH per-connection server daemon (139.178.89.65:44164). Nov 1 00:22:16.592621 ntpd[1957]: Listen normally on 7 eth0 [fe80::45c:2dff:fe5d:87a7%2]:123 Nov 1 00:22:16.593023 ntpd[1957]: 1 Nov 00:22:16 ntpd[1957]: Listen normally on 7 eth0 [fe80::45c:2dff:fe5d:87a7%2]:123 Nov 1 00:22:16.630782 sshd[2253]: Accepted publickey for core from 139.178.89.65 port 44164 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:16.632267 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:16.637028 systemd-logind[1963]: New session 4 of user core. Nov 1 00:22:16.643021 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:22:16.768075 sshd[2253]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:16.770946 systemd[1]: sshd@3-172.31.30.202:22-139.178.89.65:44164.service: Deactivated successfully. Nov 1 00:22:16.773359 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:22:16.774968 systemd-logind[1963]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:22:16.776266 systemd-logind[1963]: Removed session 4. Nov 1 00:22:16.802984 systemd[1]: Started sshd@4-172.31.30.202:22-139.178.89.65:44172.service - OpenSSH per-connection server daemon (139.178.89.65:44172). 
Nov 1 00:22:16.963118 sshd[2260]: Accepted publickey for core from 139.178.89.65 port 44172 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:16.964916 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:16.970085 systemd-logind[1963]: New session 5 of user core. Nov 1 00:22:16.980034 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:22:17.124671 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:22:17.125108 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:17.140685 sudo[2263]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:17.163916 sshd[2260]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.168642 systemd[1]: sshd@4-172.31.30.202:22-139.178.89.65:44172.service: Deactivated successfully. Nov 1 00:22:17.170862 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:22:17.171709 systemd-logind[1963]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:22:17.173059 systemd-logind[1963]: Removed session 5. Nov 1 00:22:17.193974 systemd[1]: Started sshd@5-172.31.30.202:22-139.178.89.65:44188.service - OpenSSH per-connection server daemon (139.178.89.65:44188). Nov 1 00:22:17.364557 sshd[2268]: Accepted publickey for core from 139.178.89.65 port 44188 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:17.366138 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:17.371148 systemd-logind[1963]: New session 6 of user core. Nov 1 00:22:17.382015 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:22:17.479550 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:22:17.479872 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:17.483432 sudo[2272]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:17.489458 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:22:17.489771 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:17.510084 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:17.511920 auditctl[2275]: No rules Nov 1 00:22:17.512281 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:22:17.512469 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:17.515033 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:17.544509 augenrules[2293]: No rules Nov 1 00:22:17.545952 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:17.548302 sudo[2271]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:17.571433 sshd[2268]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:17.574318 systemd[1]: sshd@5-172.31.30.202:22-139.178.89.65:44188.service: Deactivated successfully. Nov 1 00:22:17.576188 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:22:17.577409 systemd-logind[1963]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:22:17.578366 systemd-logind[1963]: Removed session 6. 
Nov 1 00:22:17.606117 systemd[1]: Started sshd@6-172.31.30.202:22-139.178.89.65:44190.service - OpenSSH per-connection server daemon (139.178.89.65:44190). Nov 1 00:22:17.755126 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 44190 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:22:17.757100 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:17.762055 systemd-logind[1963]: New session 7 of user core. Nov 1 00:22:17.768934 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:22:17.864056 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:22:17.864349 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:18.739175 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:22:18.739317 (dockerd)[2320]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:22:19.366851 dockerd[2320]: time="2025-11-01T00:22:19.366767145Z" level=info msg="Starting up" Nov 1 00:22:19.744752 systemd-resolved[1896]: Clock change detected. Flushing caches. Nov 1 00:22:19.832996 dockerd[2320]: time="2025-11-01T00:22:19.832886891Z" level=info msg="Loading containers: start." Nov 1 00:22:19.990998 kernel: Initializing XFRM netlink socket Nov 1 00:22:20.030628 (udev-worker)[2342]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:22:20.112494 systemd-networkd[1895]: docker0: Link UP Nov 1 00:22:20.137566 dockerd[2320]: time="2025-11-01T00:22:20.137521216Z" level=info msg="Loading containers: done." Nov 1 00:22:20.172621 dockerd[2320]: time="2025-11-01T00:22:20.172285932Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:22:20.172621 dockerd[2320]: time="2025-11-01T00:22:20.172396221Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:22:20.172621 dockerd[2320]: time="2025-11-01T00:22:20.172503426Z" level=info msg="Daemon has completed initialization" Nov 1 00:22:20.238353 dockerd[2320]: time="2025-11-01T00:22:20.237768454Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:22:20.238120 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:22:21.186035 containerd[1985]: time="2025-11-01T00:22:21.185986699Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 1 00:22:21.792297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007332819.mount: Deactivated successfully. 
Nov 1 00:22:23.586929 containerd[1985]: time="2025-11-01T00:22:23.586859457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:23.588586 containerd[1985]: time="2025-11-01T00:22:23.588526742Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 1 00:22:23.589242 containerd[1985]: time="2025-11-01T00:22:23.589175040Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:23.592562 containerd[1985]: time="2025-11-01T00:22:23.592102944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:23.593360 containerd[1985]: time="2025-11-01T00:22:23.593322418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.407294026s" Nov 1 00:22:23.593452 containerd[1985]: time="2025-11-01T00:22:23.593368525Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 1 00:22:23.594579 containerd[1985]: time="2025-11-01T00:22:23.594550815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 1 00:22:25.697492 containerd[1985]: time="2025-11-01T00:22:25.696937862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:25.706758 containerd[1985]: time="2025-11-01T00:22:25.706667978Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 1 00:22:25.717442 containerd[1985]: time="2025-11-01T00:22:25.716662069Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:25.729774 containerd[1985]: time="2025-11-01T00:22:25.728635473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:25.731640 containerd[1985]: time="2025-11-01T00:22:25.731506244Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.136925305s" Nov 1 00:22:25.731640 containerd[1985]: time="2025-11-01T00:22:25.731619897Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 1 00:22:25.735263 containerd[1985]: 
time="2025-11-01T00:22:25.734947611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 1 00:22:26.672132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:22:26.687196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:26.927159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:26.939157 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:27.032977 kubelet[2532]: E1101 00:22:27.032923 2532 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:27.038342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:27.038530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:27.279316 containerd[1985]: time="2025-11-01T00:22:27.279177432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:27.281070 containerd[1985]: time="2025-11-01T00:22:27.280861449Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 1 00:22:27.282628 containerd[1985]: time="2025-11-01T00:22:27.282212163Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:27.285248 containerd[1985]: time="2025-11-01T00:22:27.285214817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:27.286380 containerd[1985]: time="2025-11-01T00:22:27.286347885Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.551347275s" Nov 1 00:22:27.286474 containerd[1985]: time="2025-11-01T00:22:27.286458184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 1 00:22:27.287227 containerd[1985]: time="2025-11-01T00:22:27.287191018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 1 00:22:28.403650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2816348604.mount: Deactivated successfully. 
Nov 1 00:22:28.800512 containerd[1985]: time="2025-11-01T00:22:28.800456432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:28.801544 containerd[1985]: time="2025-11-01T00:22:28.801377417Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 1 00:22:28.802941 containerd[1985]: time="2025-11-01T00:22:28.802668900Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:28.805288 containerd[1985]: time="2025-11-01T00:22:28.805253977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:28.806143 containerd[1985]: time="2025-11-01T00:22:28.806105744Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.518872135s" Nov 1 00:22:28.806219 containerd[1985]: time="2025-11-01T00:22:28.806143199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 1 00:22:28.806908 containerd[1985]: time="2025-11-01T00:22:28.806880076Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 1 00:22:29.414026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1427821968.mount: Deactivated successfully. 
Nov 1 00:22:31.300681 containerd[1985]: time="2025-11-01T00:22:31.300612519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.301963 containerd[1985]: time="2025-11-01T00:22:31.301714520Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 1 00:22:31.303383 containerd[1985]: time="2025-11-01T00:22:31.302983601Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.308286 containerd[1985]: time="2025-11-01T00:22:31.308234298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.309575 containerd[1985]: time="2025-11-01T00:22:31.309527795Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.502601611s" Nov 1 00:22:31.309693 containerd[1985]: time="2025-11-01T00:22:31.309580671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 1 00:22:31.310992 containerd[1985]: time="2025-11-01T00:22:31.310960826Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 1 00:22:31.762910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586185304.mount: Deactivated successfully. 
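The pull records above pair a "bytes read" count with a wall-clock duration, so effective pull bandwidth falls out directly; for the coredns pull just logged, that works out to roughly 8.9 MB/s:

```python
# Effective bandwidth of the coredns pull, using the two values containerd
# logged above ("bytes read=22388007" ... "in 2.502601611s").
bytes_read = 22_388_007
seconds = 2.502601611
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")   # ~8.9 MB/s
```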
Nov 1 00:22:31.767770 containerd[1985]: time="2025-11-01T00:22:31.767690351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.768936 containerd[1985]: time="2025-11-01T00:22:31.768880958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 1 00:22:31.770361 containerd[1985]: time="2025-11-01T00:22:31.770305148Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.773564 containerd[1985]: time="2025-11-01T00:22:31.772786543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:31.773564 containerd[1985]: time="2025-11-01T00:22:31.773454835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 462.459955ms" Nov 1 00:22:31.773564 containerd[1985]: time="2025-11-01T00:22:31.773482409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 1 00:22:31.774296 containerd[1985]: time="2025-11-01T00:22:31.774269241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 1 00:22:36.372066 containerd[1985]: time="2025-11-01T00:22:36.371954687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:36.373789 containerd[1985]: time="2025-11-01T00:22:36.373702542Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 1 00:22:36.375709 containerd[1985]: time="2025-11-01T00:22:36.375014844Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:36.378455 containerd[1985]: time="2025-11-01T00:22:36.378409554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:36.380137 containerd[1985]: time="2025-11-01T00:22:36.380088793Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.605700937s" Nov 1 00:22:36.380263 containerd[1985]: time="2025-11-01T00:22:36.380145569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 1 00:22:37.170130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:22:37.177148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
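[Annotation] The etcd pull dominates this phase of the image downloads: per the records above, 73514593 bytes were read in 4.605700937s, roughly 15.2 MiB/s, while the tiny pause:3.10.1 image completed in under half a second. A quick check of that arithmetic:

    package main

    import "fmt"

    func main() {
    	// Pull rate for registry.k8s.io/etcd:3.6.4-0 from the records above:
    	// bytes read=73514593 over 4.605700937s.
    	const bytesRead = 73514593.0
    	const seconds = 4.605700937
    	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~15.2 MiB/s
    }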
Nov 1 00:22:37.429927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:37.438157 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:37.523612 kubelet[2673]: E1101 00:22:37.523565 2673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:37.527741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:37.528139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:39.461313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:39.466103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:39.503231 systemd[1]: Reloading requested from client PID 2688 ('systemctl') (unit session-7.scope)... Nov 1 00:22:39.503248 systemd[1]: Reloading... Nov 1 00:22:39.607843 zram_generator::config[2729]: No configuration found. Nov 1 00:22:39.784641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:39.876562 systemd[1]: Reloading finished in 372 ms. Nov 1 00:22:39.924794 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:22:39.924933 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:22:39.925266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:39.933449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:40.139126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:40.150176 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:40.217051 kubelet[2791]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:40.217051 kubelet[2791]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:40.217051 kubelet[2791]: I1101 00:22:40.217150 2791 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:40.718427 kubelet[2791]: I1101 00:22:40.718276 2791 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:22:40.718427 kubelet[2791]: I1101 00:22:40.718416 2791 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:40.724021 kubelet[2791]: I1101 00:22:40.723954 2791 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:22:40.724021 kubelet[2791]: I1101 00:22:40.723996 2791 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
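[Annotation] The exit above is the normal pre-bootstrap state: the kubelet unit is enabled, but /var/lib/kubelet/config.yaml is only written when kubeadm initializes or joins the node, so each scheduled restart fails with status 1 until that happens. A trivial sketch of the check that fails (illustrative, not kubelet source):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	const path = "/var/lib/kubelet/config.yaml"
    	if _, err := os.Stat(path); err != nil {
    		fmt.Fprintf(os.Stderr, "failed to load kubelet config file: %v\n", err)
    		os.Exit(1) // systemd records status=1/FAILURE, as in the log
    	}
    	fmt.Println("config present")
    }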
Nov 1 00:22:40.724276 kubelet[2791]: I1101 00:22:40.724259 2791 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:22:40.740279 kubelet[2791]: I1101 00:22:40.740225 2791 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:40.741953 kubelet[2791]: E1101 00:22:40.741484 2791 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:22:40.754233 kubelet[2791]: E1101 00:22:40.754162 2791 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:40.754365 kubelet[2791]: I1101 00:22:40.754266 2791 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:40.767626 kubelet[2791]: I1101 00:22:40.767109 2791 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 00:22:40.769970 kubelet[2791]: I1101 00:22:40.769714 2791 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:40.772249 kubelet[2791]: I1101 00:22:40.769802 2791 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-202","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:40.772564 kubelet[2791]: I1101 00:22:40.772366 2791 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:40.772564 kubelet[2791]: I1101 00:22:40.772386 2791 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:22:40.772564 kubelet[2791]: I1101 00:22:40.772526 2791 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Nov 1 00:22:40.775420 kubelet[2791]: I1101 00:22:40.775376 2791 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:40.778481 kubelet[2791]: I1101 00:22:40.778440 2791 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:22:40.778481 kubelet[2791]: I1101 00:22:40.778481 2791 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:40.779660 kubelet[2791]: I1101 00:22:40.779437 2791 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:22:40.782188 kubelet[2791]: I1101 00:22:40.781968 2791 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:40.787189 kubelet[2791]: E1101 00:22:40.785955 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-202&limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:22:40.787189 kubelet[2791]: E1101 00:22:40.786108 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:22:40.787392 kubelet[2791]: I1101 00:22:40.787305 2791 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:40.790962 kubelet[2791]: I1101 00:22:40.790549 2791 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:22:40.790962 kubelet[2791]: I1101 00:22:40.790609 2791 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:22:40.790962 kubelet[2791]: W1101 00:22:40.790673 2791 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:22:40.795550 kubelet[2791]: I1101 00:22:40.795410 2791 server.go:1262] "Started kubelet" Nov 1 00:22:40.797279 kubelet[2791]: I1101 00:22:40.797244 2791 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:40.803263 kubelet[2791]: I1101 00:22:40.802851 2791 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:40.808442 kubelet[2791]: I1101 00:22:40.808412 2791 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:22:40.812693 kubelet[2791]: I1101 00:22:40.812276 2791 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:22:40.812693 kubelet[2791]: E1101 00:22:40.812590 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:40.816110 kubelet[2791]: I1101 00:22:40.815891 2791 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:22:40.816110 kubelet[2791]: I1101 00:22:40.815911 2791 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:40.816110 kubelet[2791]: I1101 00:22:40.815960 2791 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:22:40.816110 kubelet[2791]: I1101 00:22:40.815980 2791 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:22:40.816353 kubelet[2791]: I1101 00:22:40.816186 2791 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:40.822803 kubelet[2791]: I1101 00:22:40.822754 2791 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:40.825051 kubelet[2791]: E1101 00:22:40.825011 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-202?timeout=10s\": dial tcp 172.31.30.202:6443: connect: connection refused" interval="200ms" Nov 1 00:22:40.827811 kubelet[2791]: E1101 00:22:40.825237 2791 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.202:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.202:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-202.1873ba2823e0a989 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-202,UID:ip-172-31-30-202,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-202,},FirstTimestamp:2025-11-01 00:22:40.795380105 +0000 UTC m=+0.641469089,LastTimestamp:2025-11-01 00:22:40.795380105 +0000 UTC m=+0.641469089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-202,}" Nov 1 00:22:40.831495 kubelet[2791]: E1101 00:22:40.830703 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:22:40.832861 kubelet[2791]: I1101 00:22:40.831852 2791 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:22:40.832861 kubelet[2791]: I1101 
00:22:40.831872 2791 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:22:40.832861 kubelet[2791]: I1101 00:22:40.831961 2791 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:40.838081 kubelet[2791]: E1101 00:22:40.838050 2791 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:40.848099 kubelet[2791]: I1101 00:22:40.848048 2791 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:40.855266 kubelet[2791]: I1101 00:22:40.855230 2791 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:40.855266 kubelet[2791]: I1101 00:22:40.855263 2791 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:22:40.855441 kubelet[2791]: I1101 00:22:40.855303 2791 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:22:40.855441 kubelet[2791]: E1101 00:22:40.855350 2791 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:40.860090 kubelet[2791]: E1101 00:22:40.859007 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:22:40.866328 kubelet[2791]: I1101 00:22:40.866294 2791 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:40.866477 kubelet[2791]: I1101 00:22:40.866465 2791 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:40.866579 kubelet[2791]: I1101 00:22:40.866571 2791 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:40.868795 kubelet[2791]: I1101 00:22:40.868771 2791 policy_none.go:49] "None policy: Start" Nov 1 00:22:40.869438 kubelet[2791]: I1101 00:22:40.869420 2791 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:22:40.869575 kubelet[2791]: I1101 00:22:40.869562 2791 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:22:40.871910 kubelet[2791]: I1101 00:22:40.871879 2791 policy_none.go:47] "Start" Nov 1 00:22:40.878679 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:22:40.901908 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:22:40.913939 kubelet[2791]: E1101 00:22:40.913281 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:40.913707 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
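[Annotation] The HardEvictionThresholds in the node config dumped earlier are the kubelet defaults: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The memory threshold is an absolute floor rather than a percentage; on a 2 GiB node, for instance, it amounts to under 5% of RAM:

    package main

    import "fmt"

    func main() {
    	// Default hard eviction threshold memory.available < 100Mi,
    	// expressed as a fraction of a hypothetical 2 GiB of RAM.
    	const thresholdMi = 100.0
    	const ramMi = 2048.0
    	fmt.Printf("%.1f%% of RAM\n", thresholdMi/ramMi*100) // ~4.9%
    }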
Nov 1 00:22:40.915755 kubelet[2791]: E1101 00:22:40.915340 2791 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:22:40.915850 kubelet[2791]: I1101 00:22:40.915758 2791 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:40.915850 kubelet[2791]: I1101 00:22:40.915774 2791 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:40.917567 kubelet[2791]: I1101 00:22:40.917534 2791 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:40.918418 kubelet[2791]: E1101 00:22:40.918390 2791 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:40.918510 kubelet[2791]: E1101 00:22:40.918433 2791 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-202\" not found" Nov 1 00:22:40.973968 systemd[1]: Created slice kubepods-burstable-podf581918bf8231bf86b359e79d1ea893a.slice - libcontainer container kubepods-burstable-podf581918bf8231bf86b359e79d1ea893a.slice. Nov 1 00:22:40.983010 kubelet[2791]: E1101 00:22:40.982769 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:40.986509 systemd[1]: Created slice kubepods-burstable-podd64cf93905a643a7960241eaadad6162.slice - libcontainer container kubepods-burstable-podd64cf93905a643a7960241eaadad6162.slice. Nov 1 00:22:40.998545 kubelet[2791]: E1101 00:22:40.998333 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:41.001920 systemd[1]: Created slice kubepods-burstable-pod8cca941e724b3c85d96e95d49694bc0d.slice - libcontainer container kubepods-burstable-pod8cca941e724b3c85d96e95d49694bc0d.slice. 
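[Annotation] The three kubepods-burstable-pod<uid>.slice units created above correspond to the static control-plane pods. With the systemd cgroup driver (CgroupDriver "systemd" in the node config earlier), the kubelet derives each slice name from the pod's QoS class and UID. A sketch of the apparent naming rule (assumed: dashes in a UID are rewritten to underscores, a no-op for these hash-style static-pod UIDs):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice sketches the systemd cgroup driver's per-pod slice naming:
    // kubepods-<qos>-pod<uid>.slice, with "-" in the UID rewritten to "_".
    func podSlice(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(podSlice("burstable", "f581918bf8231bf86b359e79d1ea893a"))
    	// Output: kubepods-burstable-podf581918bf8231bf86b359e79d1ea893a.slice
    }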
Nov 1 00:22:41.004345 kubelet[2791]: E1101 00:22:41.004313 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:41.018261 kubelet[2791]: I1101 00:22:41.018224 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:41.018582 kubelet[2791]: E1101 00:22:41.018531 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.202:6443/api/v1/nodes\": dial tcp 172.31.30.202:6443: connect: connection refused" node="ip-172-31-30-202" Nov 1 00:22:41.026208 kubelet[2791]: E1101 00:22:41.026127 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-202?timeout=10s\": dial tcp 172.31.30.202:6443: connect: connection refused" interval="400ms" Nov 1 00:22:41.118145 kubelet[2791]: I1101 00:22:41.117866 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:41.118145 kubelet[2791]: I1101 00:22:41.117912 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cca941e724b3c85d96e95d49694bc0d-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-202\" (UID: \"8cca941e724b3c85d96e95d49694bc0d\") " pod="kube-system/kube-scheduler-ip-172-31-30-202" Nov 1 00:22:41.118145 kubelet[2791]: I1101 00:22:41.117983 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f581918bf8231bf86b359e79d1ea893a-ca-certs\") pod \"kube-apiserver-ip-172-31-30-202\" (UID: \"f581918bf8231bf86b359e79d1ea893a\") " pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:41.118145 kubelet[2791]: I1101 00:22:41.118013 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:41.118145 kubelet[2791]: I1101 00:22:41.118034 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:41.118379 kubelet[2791]: I1101 00:22:41.118055 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:41.118379 kubelet[2791]: I1101 00:22:41.118069 2791 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f581918bf8231bf86b359e79d1ea893a-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-202\" (UID: \"f581918bf8231bf86b359e79d1ea893a\") " pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:41.118379 kubelet[2791]: I1101 00:22:41.118084 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f581918bf8231bf86b359e79d1ea893a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-202\" (UID: \"f581918bf8231bf86b359e79d1ea893a\") " pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:41.118379 kubelet[2791]: I1101 00:22:41.118099 2791 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:41.220339 kubelet[2791]: I1101 00:22:41.220305 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:41.220965 kubelet[2791]: E1101 00:22:41.220665 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.202:6443/api/v1/nodes\": dial tcp 172.31.30.202:6443: connect: connection refused" node="ip-172-31-30-202" Nov 1 00:22:41.288554 containerd[1985]: time="2025-11-01T00:22:41.288420712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-202,Uid:f581918bf8231bf86b359e79d1ea893a,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:41.319242 containerd[1985]: time="2025-11-01T00:22:41.318926039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-202,Uid:8cca941e724b3c85d96e95d49694bc0d,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:41.321262 containerd[1985]: time="2025-11-01T00:22:41.321228465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-202,Uid:d64cf93905a643a7960241eaadad6162,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:41.427917 kubelet[2791]: E1101 00:22:41.427371 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-202?timeout=10s\": dial tcp 172.31.30.202:6443: connect: connection refused" interval="800ms" Nov 1 00:22:41.623188 kubelet[2791]: I1101 00:22:41.623091 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:41.623462 kubelet[2791]: E1101 00:22:41.623385 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.202:6443/api/v1/nodes\": dial tcp 172.31.30.202:6443: connect: connection refused" node="ip-172-31-30-202" Nov 1 00:22:41.696750 kubelet[2791]: E1101 00:22:41.696684 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:22:41.829403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601117130.mount: Deactivated successfully. 
Nov 1 00:22:41.848107 containerd[1985]: time="2025-11-01T00:22:41.848048944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:41.850243 containerd[1985]: time="2025-11-01T00:22:41.850188845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:41.852595 containerd[1985]: time="2025-11-01T00:22:41.852534380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:41.854451 containerd[1985]: time="2025-11-01T00:22:41.854371753Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:41.856251 containerd[1985]: time="2025-11-01T00:22:41.856200643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:41.858930 containerd[1985]: time="2025-11-01T00:22:41.858815537Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:41.863758 containerd[1985]: time="2025-11-01T00:22:41.863190493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:22:41.869431 containerd[1985]: time="2025-11-01T00:22:41.869387452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:41.870214 containerd[1985]: time="2025-11-01T00:22:41.870186306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.16571ms" Nov 1 00:22:41.873408 containerd[1985]: time="2025-11-01T00:22:41.872454766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 583.951956ms" Nov 1 00:22:41.874085 containerd[1985]: time="2025-11-01T00:22:41.873741282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.182236ms" Nov 1 00:22:42.120354 containerd[1985]: time="2025-11-01T00:22:42.119371037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:42.120354 containerd[1985]: time="2025-11-01T00:22:42.119457723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:42.120354 containerd[1985]: time="2025-11-01T00:22:42.119482055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:42.120354 containerd[1985]: time="2025-11-01T00:22:42.119591654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:42.123107 containerd[1985]: time="2025-11-01T00:22:42.122934179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:42.124414 containerd[1985]: time="2025-11-01T00:22:42.124234855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:42.126281 containerd[1985]: time="2025-11-01T00:22:42.125525505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:42.127547 containerd[1985]: time="2025-11-01T00:22:42.127365727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:42.127547 containerd[1985]: time="2025-11-01T00:22:42.126853833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:42.127547 containerd[1985]: time="2025-11-01T00:22:42.126939218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:42.127547 containerd[1985]: time="2025-11-01T00:22:42.126970518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:42.127547 containerd[1985]: time="2025-11-01T00:22:42.127121327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:42.154005 systemd[1]: Started cri-containerd-d5944f72ed93198fbd8ff737914421e9b6b7cd78d1872fdedbd137795d5d13b3.scope - libcontainer container d5944f72ed93198fbd8ff737914421e9b6b7cd78d1872fdedbd137795d5d13b3. Nov 1 00:22:42.169398 systemd[1]: Started cri-containerd-05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69.scope - libcontainer container 05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69. Nov 1 00:22:42.190867 systemd[1]: Started cri-containerd-69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b.scope - libcontainer container 69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b. 
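[Annotation] Each "Started cri-containerd-<id>.scope" record is a transient systemd scope wrapping one runc v2 shim, and the repeated "loading plugin" lines above are those three shims initializing their ttrpc services. A sketch of inspecting the resulting containers via the classic containerd Go client (assumes the default socket path and the "k8s.io" namespace that CRI uses; newer containerd releases moved the client under a /v2 import path):

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// CRI keeps its containers in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range containers {
    		fmt.Println(c.ID())
    	}
    }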
Nov 1 00:22:42.229175 kubelet[2791]: E1101 00:22:42.229130 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-202?timeout=10s\": dial tcp 172.31.30.202:6443: connect: connection refused" interval="1.6s" Nov 1 00:22:42.236299 containerd[1985]: time="2025-11-01T00:22:42.236259308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-202,Uid:f581918bf8231bf86b359e79d1ea893a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5944f72ed93198fbd8ff737914421e9b6b7cd78d1872fdedbd137795d5d13b3\"" Nov 1 00:22:42.251751 containerd[1985]: time="2025-11-01T00:22:42.251298992Z" level=info msg="CreateContainer within sandbox \"d5944f72ed93198fbd8ff737914421e9b6b7cd78d1872fdedbd137795d5d13b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:22:42.278451 containerd[1985]: time="2025-11-01T00:22:42.278385617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-202,Uid:d64cf93905a643a7960241eaadad6162,Namespace:kube-system,Attempt:0,} returns sandbox id \"05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69\"" Nov 1 00:22:42.284796 containerd[1985]: time="2025-11-01T00:22:42.284759730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-202,Uid:8cca941e724b3c85d96e95d49694bc0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b\"" Nov 1 00:22:42.286349 containerd[1985]: time="2025-11-01T00:22:42.286261746Z" level=info msg="CreateContainer within sandbox \"05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:42.306617 kubelet[2791]: E1101 00:22:42.306573 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:22:42.313046 containerd[1985]: time="2025-11-01T00:22:42.313010686Z" level=info msg="CreateContainer within sandbox \"69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:42.328795 kubelet[2791]: E1101 00:22:42.328646 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-202&limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:22:42.360364 containerd[1985]: time="2025-11-01T00:22:42.360097798Z" level=info msg="CreateContainer within sandbox \"69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754\"" Nov 1 00:22:42.363060 containerd[1985]: time="2025-11-01T00:22:42.363015562Z" level=info msg="CreateContainer within sandbox \"d5944f72ed93198fbd8ff737914421e9b6b7cd78d1872fdedbd137795d5d13b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"871317a5504c9ff42d25d509ad496faa26e10ccf1900c7b9783f12660afd8acc\"" Nov 1 00:22:42.363356 containerd[1985]: time="2025-11-01T00:22:42.363326809Z" level=info msg="StartContainer for \"d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754\"" Nov 1 00:22:42.367750 containerd[1985]: time="2025-11-01T00:22:42.366237385Z" level=info msg="CreateContainer within sandbox \"05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616\"" Nov 1 00:22:42.367750 containerd[1985]: time="2025-11-01T00:22:42.366463370Z" level=info msg="StartContainer for \"871317a5504c9ff42d25d509ad496faa26e10ccf1900c7b9783f12660afd8acc\"" Nov 1 00:22:42.381243 containerd[1985]: time="2025-11-01T00:22:42.380457545Z" level=info msg="StartContainer for \"17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616\"" Nov 1 00:22:42.425362 kubelet[2791]: I1101 00:22:42.425114 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:42.425504 kubelet[2791]: E1101 00:22:42.425468 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.202:6443/api/v1/nodes\": dial tcp 172.31.30.202:6443: connect: connection refused" node="ip-172-31-30-202" Nov 1 00:22:42.425679 systemd[1]: Started cri-containerd-871317a5504c9ff42d25d509ad496faa26e10ccf1900c7b9783f12660afd8acc.scope - libcontainer container 871317a5504c9ff42d25d509ad496faa26e10ccf1900c7b9783f12660afd8acc. Nov 1 00:22:42.427416 systemd[1]: Started cri-containerd-d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754.scope - libcontainer container d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754. Nov 1 00:22:42.438249 systemd[1]: Started cri-containerd-17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616.scope - libcontainer container 17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616. 
Nov 1 00:22:42.458969 kubelet[2791]: E1101 00:22:42.458930 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:22:42.529039 containerd[1985]: time="2025-11-01T00:22:42.528988574Z" level=info msg="StartContainer for \"871317a5504c9ff42d25d509ad496faa26e10ccf1900c7b9783f12660afd8acc\" returns successfully" Nov 1 00:22:42.539667 containerd[1985]: time="2025-11-01T00:22:42.539418299Z" level=info msg="StartContainer for \"d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754\" returns successfully" Nov 1 00:22:42.544913 containerd[1985]: time="2025-11-01T00:22:42.544866055Z" level=info msg="StartContainer for \"17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616\" returns successfully" Nov 1 00:22:42.748318 kubelet[2791]: E1101 00:22:42.748261 2791 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:22:42.870224 kubelet[2791]: E1101 00:22:42.870184 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:42.873533 kubelet[2791]: E1101 00:22:42.873500 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:42.887100 kubelet[2791]: E1101 00:22:42.887059 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:43.525582 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
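[Annotation] At this point all three control-plane containers have started, yet the certificate signing request above still fails: the kubelet is racing the kube-apiserver it just launched, and "connection refused" persists until that container binds :6443. A trivial probe of the same endpoint, for illustration:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The endpoint every failed call in this log targets.
    	conn, err := net.DialTimeout("tcp", "172.31.30.202:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not up yet:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver is listening")
    }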
Nov 1 00:22:43.830194 kubelet[2791]: E1101 00:22:43.829892 2791 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-202?timeout=10s\": dial tcp 172.31.30.202:6443: connect: connection refused" interval="3.2s" Nov 1 00:22:43.878327 kubelet[2791]: E1101 00:22:43.878279 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:22:43.880224 kubelet[2791]: E1101 00:22:43.879971 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:43.880771 kubelet[2791]: E1101 00:22:43.880616 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:43.951506 kubelet[2791]: E1101 00:22:43.951446 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:22:44.027448 kubelet[2791]: I1101 00:22:44.027285 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:44.027875 kubelet[2791]: E1101 00:22:44.027845 2791 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.202:6443/api/v1/nodes\": dial tcp 172.31.30.202:6443: connect: connection refused" node="ip-172-31-30-202" Nov 1 00:22:44.198236 kubelet[2791]: E1101 00:22:44.198182 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-202&limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:22:44.768641 kubelet[2791]: E1101 00:22:44.768589 2791 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:22:45.597226 kubelet[2791]: E1101 00:22:45.597191 2791 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:47.232075 kubelet[2791]: I1101 00:22:47.231972 2791 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:47.272064 kubelet[2791]: E1101 00:22:47.272004 2791 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-202\" not found" node="ip-172-31-30-202" Nov 1 00:22:47.367628 kubelet[2791]: I1101 00:22:47.367372 2791 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-202" Nov 1 
00:22:47.367628 kubelet[2791]: E1101 00:22:47.367515 2791 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-202\": node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.392526 kubelet[2791]: E1101 00:22:47.392488 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.492846 kubelet[2791]: E1101 00:22:47.492711 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.593632 kubelet[2791]: E1101 00:22:47.593560 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.693995 kubelet[2791]: E1101 00:22:47.693953 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.794905 kubelet[2791]: E1101 00:22:47.794784 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.895572 kubelet[2791]: E1101 00:22:47.895497 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:47.995825 kubelet[2791]: E1101 00:22:47.995712 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:48.096786 kubelet[2791]: E1101 00:22:48.096638 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:48.197837 kubelet[2791]: E1101 00:22:48.197778 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:48.298219 kubelet[2791]: E1101 00:22:48.297961 2791 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-30-202\" not found" Nov 1 00:22:48.314244 kubelet[2791]: I1101 00:22:48.313881 2791 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:48.327980 kubelet[2791]: I1101 00:22:48.327935 2791 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-202" Nov 1 00:22:48.334474 kubelet[2791]: I1101 00:22:48.334425 2791 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:48.788742 kubelet[2791]: I1101 00:22:48.788663 2791 apiserver.go:52] "Watching apiserver" Nov 1 00:22:48.819924 kubelet[2791]: I1101 00:22:48.819818 2791 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:22:49.633958 systemd[1]: Reloading requested from client PID 3082 ('systemctl') (unit session-7.scope)... Nov 1 00:22:49.633977 systemd[1]: Reloading... Nov 1 00:22:49.769769 zram_generator::config[3128]: No configuration found. Nov 1 00:22:49.913773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:50.039400 systemd[1]: Reloading finished in 404 ms. 
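[Annotation] This second systemctl-driven restart (reload PID 3082, versus 2688 earlier in the same session-7.scope) comes after bootstrapping has succeeded: the node registered above, and the kubelet started below finds a client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem, the combined cert-plus-key PEM that client rotation maintains. A sketch of loading such a pair (illustrative; both PEM blocks live in the one file, so the same path serves as certFile and keyFile):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    )

    func main() {
    	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
    	pair, err := tls.LoadX509KeyPair(pem, pem)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(len(pair.Certificate), "certificate(s) in chain")
    }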
Nov 1 00:22:50.098435 kubelet[2791]: I1101 00:22:50.098167 2791 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:50.098351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:50.119306 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:22:50.119685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:50.119768 systemd[1]: kubelet.service: Consumed 1.072s CPU time, 123.1M memory peak, 0B memory swap peak. Nov 1 00:22:50.133004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:50.418199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:50.430246 (kubelet)[3182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:50.505600 kubelet[3182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:50.505978 kubelet[3182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:50.506103 kubelet[3182]: I1101 00:22:50.506079 3182 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:50.513149 kubelet[3182]: I1101 00:22:50.513111 3182 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:22:50.513290 kubelet[3182]: I1101 00:22:50.513279 3182 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:50.513805 kubelet[3182]: I1101 00:22:50.513381 3182 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:22:50.513805 kubelet[3182]: I1101 00:22:50.513394 3182 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:22:50.517453 kubelet[3182]: I1101 00:22:50.517404 3182 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:22:50.518780 kubelet[3182]: I1101 00:22:50.518756 3182 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:22:50.522412 kubelet[3182]: I1101 00:22:50.522247 3182 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:50.570521 kubelet[3182]: E1101 00:22:50.570476 3182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:50.570649 kubelet[3182]: I1101 00:22:50.570536 3182 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:50.573571 kubelet[3182]: I1101 00:22:50.573280 3182 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:22:50.575514 kubelet[3182]: I1101 00:22:50.575478 3182 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:50.576754 kubelet[3182]: I1101 00:22:50.575628 3182 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-202","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:50.576754 kubelet[3182]: I1101 00:22:50.575811 3182 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:50.576754 kubelet[3182]: I1101 00:22:50.575821 3182 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:22:50.576754 kubelet[3182]: I1101 00:22:50.575848 3182 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:22:50.577087 kubelet[3182]: I1101 00:22:50.577071 3182 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:50.583220 kubelet[3182]: I1101 00:22:50.582229 3182 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:22:50.583220 kubelet[3182]: I1101 00:22:50.582788 3182 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:50.583220 kubelet[3182]: I1101 00:22:50.582844 3182 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:22:50.583220 kubelet[3182]: I1101 00:22:50.582866 3182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:50.589623 kubelet[3182]: I1101 00:22:50.588849 3182 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:50.591064 kubelet[3182]: I1101 00:22:50.589957 3182 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:22:50.591064 kubelet[3182]: I1101 00:22:50.590018 3182 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:22:50.594605 kubelet[3182]: 
I1101 00:22:50.594580 3182 server.go:1262] "Started kubelet" Nov 1 00:22:50.610970 kubelet[3182]: I1101 00:22:50.610929 3182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:50.629887 kubelet[3182]: I1101 00:22:50.629845 3182 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:50.635577 kubelet[3182]: I1101 00:22:50.635547 3182 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:22:50.644207 kubelet[3182]: I1101 00:22:50.643784 3182 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:50.644207 kubelet[3182]: I1101 00:22:50.643855 3182 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:22:50.644804 kubelet[3182]: I1101 00:22:50.644759 3182 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:50.645129 kubelet[3182]: I1101 00:22:50.645102 3182 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:50.653394 kubelet[3182]: I1101 00:22:50.652777 3182 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:22:50.654333 kubelet[3182]: I1101 00:22:50.654071 3182 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:22:50.655361 kubelet[3182]: I1101 00:22:50.654552 3182 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:22:50.661582 kubelet[3182]: I1101 00:22:50.661550 3182 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:22:50.663355 kubelet[3182]: I1101 00:22:50.663213 3182 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:50.672002 kubelet[3182]: E1101 00:22:50.670813 3182 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:50.675509 kubelet[3182]: I1101 00:22:50.675371 3182 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:50.676563 kubelet[3182]: I1101 00:22:50.676270 3182 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:22:50.679775 kubelet[3182]: I1101 00:22:50.679475 3182 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:22:50.679775 kubelet[3182]: I1101 00:22:50.679505 3182 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:22:50.679775 kubelet[3182]: I1101 00:22:50.679554 3182 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:22:50.679775 kubelet[3182]: E1101 00:22:50.679628 3182 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:50.780032 kubelet[3182]: E1101 00:22:50.779850 3182 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794267 3182 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794287 3182 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794312 3182 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794507 3182 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794522 3182 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794548 3182 policy_none.go:49] "None policy: Start" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794559 3182 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794569 3182 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794684 3182 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:22:50.794780 kubelet[3182]: I1101 00:22:50.794693 3182 policy_none.go:47] "Start" Nov 1 00:22:50.804514 kubelet[3182]: E1101 00:22:50.803880 3182 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:22:50.804514 kubelet[3182]: I1101 00:22:50.804089 3182 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:50.804514 kubelet[3182]: I1101 00:22:50.804101 3182 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:50.804514 kubelet[3182]: I1101 00:22:50.804459 3182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:50.809573 kubelet[3182]: E1101 00:22:50.809380 3182 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:22:50.923584 kubelet[3182]: I1101 00:22:50.922781 3182 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-202" Nov 1 00:22:50.934521 kubelet[3182]: I1101 00:22:50.934397 3182 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-202" Nov 1 00:22:50.934521 kubelet[3182]: I1101 00:22:50.934484 3182 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-202" Nov 1 00:22:50.984760 kubelet[3182]: I1101 00:22:50.981472 3182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:50.984760 kubelet[3182]: I1101 00:22:50.981596 3182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-202" Nov 1 00:22:50.984760 kubelet[3182]: I1101 00:22:50.981472 3182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:50.996822 kubelet[3182]: E1101 00:22:50.996688 3182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-202\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-202" Nov 1 00:22:50.998525 kubelet[3182]: E1101 00:22:50.998485 3182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-202\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:50.998642 kubelet[3182]: E1101 00:22:50.998483 3182 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-202\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.159394 kubelet[3182]: I1101 00:22:51.159194 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.159394 kubelet[3182]: I1101 00:22:51.159239 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.159394 kubelet[3182]: I1101 00:22:51.159266 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.159394 kubelet[3182]: I1101 00:22:51.159291 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f581918bf8231bf86b359e79d1ea893a-ca-certs\") pod \"kube-apiserver-ip-172-31-30-202\" (UID: \"f581918bf8231bf86b359e79d1ea893a\") " pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:51.159394 kubelet[3182]: I1101 00:22:51.159315 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f581918bf8231bf86b359e79d1ea893a-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-202\" (UID: \"f581918bf8231bf86b359e79d1ea893a\") " pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:51.160928 kubelet[3182]: I1101 00:22:51.159335 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.160928 kubelet[3182]: I1101 00:22:51.159354 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d64cf93905a643a7960241eaadad6162-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-202\" (UID: \"d64cf93905a643a7960241eaadad6162\") " pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.160928 kubelet[3182]: I1101 00:22:51.159378 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cca941e724b3c85d96e95d49694bc0d-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-202\" (UID: \"8cca941e724b3c85d96e95d49694bc0d\") " pod="kube-system/kube-scheduler-ip-172-31-30-202" Nov 1 00:22:51.160928 kubelet[3182]: I1101 00:22:51.159495 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f581918bf8231bf86b359e79d1ea893a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-202\" (UID: \"f581918bf8231bf86b359e79d1ea893a\") " pod="kube-system/kube-apiserver-ip-172-31-30-202" Nov 1 00:22:51.605100 kubelet[3182]: I1101 00:22:51.603359 3182 apiserver.go:52] "Watching apiserver" Nov 1 00:22:51.655273 kubelet[3182]: I1101 00:22:51.655234 3182 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:22:51.761770 kubelet[3182]: I1101 00:22:51.760863 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-202" podStartSLOduration=3.760846589 podStartE2EDuration="3.760846589s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:51.759498761 +0000 UTC m=+1.320666705" watchObservedRunningTime="2025-11-01 00:22:51.760846589 +0000 UTC m=+1.322014516" Nov 1 00:22:51.761770 kubelet[3182]: I1101 00:22:51.760968 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-202" podStartSLOduration=3.760962915 podStartE2EDuration="3.760962915s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:51.739595909 +0000 UTC m=+1.300763845" watchObservedRunningTime="2025-11-01 00:22:51.760962915 +0000 UTC m=+1.322130848" Nov 1 00:22:51.768803 kubelet[3182]: I1101 00:22:51.766708 3182 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.779791 kubelet[3182]: E1101 00:22:51.778890 3182 kubelet.go:3221] "Failed creating a mirror pod" 
err="pods \"kube-controller-manager-ip-172-31-30-202\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-202" Nov 1 00:22:51.787865 kubelet[3182]: I1101 00:22:51.787713 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-202" podStartSLOduration=3.78769688 podStartE2EDuration="3.78769688s" podCreationTimestamp="2025-11-01 00:22:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:51.7736234 +0000 UTC m=+1.334791334" watchObservedRunningTime="2025-11-01 00:22:51.78769688 +0000 UTC m=+1.348864858" Nov 1 00:22:55.085004 kubelet[3182]: I1101 00:22:55.084868 3182 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:22:55.087136 containerd[1985]: time="2025-11-01T00:22:55.086662366Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:22:55.087892 kubelet[3182]: I1101 00:22:55.086934 3182 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:22:55.688480 systemd[1]: Created slice kubepods-besteffort-pod9a783191_4a2c_42d8_ba76_e83e6dcc499b.slice - libcontainer container kubepods-besteffort-pod9a783191_4a2c_42d8_ba76_e83e6dcc499b.slice. Nov 1 00:22:55.693355 kubelet[3182]: I1101 00:22:55.693320 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a783191-4a2c-42d8-ba76-e83e6dcc499b-kube-proxy\") pod \"kube-proxy-rdvkw\" (UID: \"9a783191-4a2c-42d8-ba76-e83e6dcc499b\") " pod="kube-system/kube-proxy-rdvkw" Nov 1 00:22:55.693493 kubelet[3182]: I1101 00:22:55.693363 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a783191-4a2c-42d8-ba76-e83e6dcc499b-xtables-lock\") pod \"kube-proxy-rdvkw\" (UID: \"9a783191-4a2c-42d8-ba76-e83e6dcc499b\") " pod="kube-system/kube-proxy-rdvkw" Nov 1 00:22:55.693493 kubelet[3182]: I1101 00:22:55.693385 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a783191-4a2c-42d8-ba76-e83e6dcc499b-lib-modules\") pod \"kube-proxy-rdvkw\" (UID: \"9a783191-4a2c-42d8-ba76-e83e6dcc499b\") " pod="kube-system/kube-proxy-rdvkw" Nov 1 00:22:55.693493 kubelet[3182]: I1101 00:22:55.693420 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nhs\" (UniqueName: \"kubernetes.io/projected/9a783191-4a2c-42d8-ba76-e83e6dcc499b-kube-api-access-95nhs\") pod \"kube-proxy-rdvkw\" (UID: \"9a783191-4a2c-42d8-ba76-e83e6dcc499b\") " pod="kube-system/kube-proxy-rdvkw" Nov 1 00:22:55.810452 kubelet[3182]: E1101 00:22:55.810415 3182 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:22:55.810452 kubelet[3182]: E1101 00:22:55.810455 3182 projected.go:196] Error preparing data for projected volume kube-api-access-95nhs for pod kube-system/kube-proxy-rdvkw: configmap "kube-root-ca.crt" not found Nov 1 00:22:55.810840 kubelet[3182]: E1101 00:22:55.810560 3182 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a783191-4a2c-42d8-ba76-e83e6dcc499b-kube-api-access-95nhs 
podName:9a783191-4a2c-42d8-ba76-e83e6dcc499b nodeName:}" failed. No retries permitted until 2025-11-01 00:22:56.310531397 +0000 UTC m=+5.871699309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-95nhs" (UniqueName: "kubernetes.io/projected/9a783191-4a2c-42d8-ba76-e83e6dcc499b-kube-api-access-95nhs") pod "kube-proxy-rdvkw" (UID: "9a783191-4a2c-42d8-ba76-e83e6dcc499b") : configmap "kube-root-ca.crt" not found Nov 1 00:22:56.177477 systemd[1]: Created slice kubepods-besteffort-pod50f6d7ee_2b17_492d_a5e7_e634afeaf3d0.slice - libcontainer container kubepods-besteffort-pod50f6d7ee_2b17_492d_a5e7_e634afeaf3d0.slice. Nov 1 00:22:56.201787 kubelet[3182]: I1101 00:22:56.201749 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50f6d7ee-2b17-492d-a5e7-e634afeaf3d0-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-2dsbk\" (UID: \"50f6d7ee-2b17-492d-a5e7-e634afeaf3d0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2dsbk" Nov 1 00:22:56.201787 kubelet[3182]: I1101 00:22:56.201804 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsk85\" (UniqueName: \"kubernetes.io/projected/50f6d7ee-2b17-492d-a5e7-e634afeaf3d0-kube-api-access-dsk85\") pod \"tigera-operator-65cdcdfd6d-2dsbk\" (UID: \"50f6d7ee-2b17-492d-a5e7-e634afeaf3d0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2dsbk" Nov 1 00:22:56.491521 containerd[1985]: time="2025-11-01T00:22:56.490995538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2dsbk,Uid:50f6d7ee-2b17-492d-a5e7-e634afeaf3d0,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:22:56.522364 containerd[1985]: time="2025-11-01T00:22:56.522167221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:56.522364 containerd[1985]: time="2025-11-01T00:22:56.522359362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:56.522581 containerd[1985]: time="2025-11-01T00:22:56.522396375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.522581 containerd[1985]: time="2025-11-01T00:22:56.522520200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.551992 systemd[1]: Started cri-containerd-b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910.scope - libcontainer container b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910. 
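The MountVolume.SetUp failure above is parked rather than retried in a tight loop: the operation is re-queued with a 500ms durationBeforeRetry, and kubelet's volume-operation backoff grows that delay on each subsequent failure of the same operation. A minimal Go sketch of such a doubling schedule follows; the doubling factor and the 2m2s cap are assumptions based on commonly cited kubelet defaults, not values this log shows.

// backoff_sketch.go -- illustrative doubling retry delay behind the
// "durationBeforeRetry 500ms" line above. The factor-2 growth and the
// 2m2s cap are ASSUMED defaults, not taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const initial = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
	d := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}

In this boot the retry never escalates: the kube-root-ca.crt configmap appears within the first 500ms window and the projected volume mounts on the second attempt.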
Nov 1 00:22:56.597867 containerd[1985]: time="2025-11-01T00:22:56.597798421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2dsbk,Uid:50f6d7ee-2b17-492d-a5e7-e634afeaf3d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910\"" Nov 1 00:22:56.604045 containerd[1985]: time="2025-11-01T00:22:56.603695331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdvkw,Uid:9a783191-4a2c-42d8-ba76-e83e6dcc499b,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:56.607866 containerd[1985]: time="2025-11-01T00:22:56.607821312Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:22:56.637593 containerd[1985]: time="2025-11-01T00:22:56.637306252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:56.637593 containerd[1985]: time="2025-11-01T00:22:56.637482925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:56.637593 containerd[1985]: time="2025-11-01T00:22:56.637499972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.638653 containerd[1985]: time="2025-11-01T00:22:56.638587933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.667977 systemd[1]: Started cri-containerd-4b513811afd4450bc4fc5debf529cf54e1e18d4ab5300a5e79f8b7c97b027cf6.scope - libcontainer container 4b513811afd4450bc4fc5debf529cf54e1e18d4ab5300a5e79f8b7c97b027cf6. Nov 1 00:22:56.695455 containerd[1985]: time="2025-11-01T00:22:56.695261083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdvkw,Uid:9a783191-4a2c-42d8-ba76-e83e6dcc499b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b513811afd4450bc4fc5debf529cf54e1e18d4ab5300a5e79f8b7c97b027cf6\"" Nov 1 00:22:56.708699 containerd[1985]: time="2025-11-01T00:22:56.708663476Z" level=info msg="CreateContainer within sandbox \"4b513811afd4450bc4fc5debf529cf54e1e18d4ab5300a5e79f8b7c97b027cf6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:22:56.740115 containerd[1985]: time="2025-11-01T00:22:56.740052659Z" level=info msg="CreateContainer within sandbox \"4b513811afd4450bc4fc5debf529cf54e1e18d4ab5300a5e79f8b7c97b027cf6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11542fa3600cd5bbc13947d76f6010bce74745b25fc60c418a704322133a88f8\"" Nov 1 00:22:56.740828 containerd[1985]: time="2025-11-01T00:22:56.740787354Z" level=info msg="StartContainer for \"11542fa3600cd5bbc13947d76f6010bce74745b25fc60c418a704322133a88f8\"" Nov 1 00:22:56.772259 systemd[1]: Started cri-containerd-11542fa3600cd5bbc13947d76f6010bce74745b25fc60c418a704322133a88f8.scope - libcontainer container 11542fa3600cd5bbc13947d76f6010bce74745b25fc60c418a704322133a88f8. 
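The passage above shows a complete CRI round trip for kube-proxy-rdvkw: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer launches the result. A rough Go sketch of the same three calls against containerd's CRI socket, using the k8s.io/cri-api definitions, is below. It is illustrative only: the kube-proxy image tag is hypothetical, and a real caller (the kubelet) passes a much fuller ContainerConfig plus the SandboxConfig.

// cri_sketch.go -- minimal sketch (not the kubelet's actual code path) of the
// RunPodSandbox -> CreateContainer -> StartContainer sequence seen in the log.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: metadata mirrors what containerd prints above.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-rdvkw",
				Uid:       "9a783191-4a2c-42d8-ba76-e83e6dcc499b",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside the returned sandbox id.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Hypothetical image tag; the log does not record which tag was used.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.0"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}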
Nov 1 00:22:56.818705 containerd[1985]: time="2025-11-01T00:22:56.818628422Z" level=info msg="StartContainer for \"11542fa3600cd5bbc13947d76f6010bce74745b25fc60c418a704322133a88f8\" returns successfully" Nov 1 00:22:58.048665 kubelet[3182]: I1101 00:22:58.048597 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rdvkw" podStartSLOduration=3.04745261 podStartE2EDuration="3.04745261s" podCreationTimestamp="2025-11-01 00:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:57.817831301 +0000 UTC m=+7.378999232" watchObservedRunningTime="2025-11-01 00:22:58.04745261 +0000 UTC m=+7.608620546" Nov 1 00:22:58.219570 update_engine[1964]: I20251101 00:22:58.218768 1964 update_attempter.cc:509] Updating boot flags... Nov 1 00:22:58.252773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436528646.mount: Deactivated successfully. Nov 1 00:22:58.442927 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3485) Nov 1 00:22:59.392575 containerd[1985]: time="2025-11-01T00:22:59.392524745Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.397408 containerd[1985]: time="2025-11-01T00:22:59.397338121Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:22:59.401734 containerd[1985]: time="2025-11-01T00:22:59.400444931Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.408190 containerd[1985]: time="2025-11-01T00:22:59.406908191Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.408190 containerd[1985]: time="2025-11-01T00:22:59.407994925Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.800127026s" Nov 1 00:22:59.408190 containerd[1985]: time="2025-11-01T00:22:59.408058006Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:22:59.416173 containerd[1985]: time="2025-11-01T00:22:59.416119895Z" level=info msg="CreateContainer within sandbox \"b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:22:59.443914 containerd[1985]: time="2025-11-01T00:22:59.443864255Z" level=info msg="CreateContainer within sandbox \"b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb\"" Nov 1 00:22:59.444812 containerd[1985]: time="2025-11-01T00:22:59.444778407Z" level=info msg="StartContainer for \"6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb\"" Nov 1 00:22:59.486957 systemd[1]: Started 
cri-containerd-6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb.scope - libcontainer container 6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb. Nov 1 00:22:59.522092 containerd[1985]: time="2025-11-01T00:22:59.522045785Z" level=info msg="StartContainer for \"6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb\" returns successfully" Nov 1 00:23:00.709600 kubelet[3182]: I1101 00:23:00.706360 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-2dsbk" podStartSLOduration=1.8976990379999998 podStartE2EDuration="4.706336829s" podCreationTimestamp="2025-11-01 00:22:56 +0000 UTC" firstStartedPulling="2025-11-01 00:22:56.600487654 +0000 UTC m=+6.161655566" lastFinishedPulling="2025-11-01 00:22:59.409125443 +0000 UTC m=+8.970293357" observedRunningTime="2025-11-01 00:22:59.825959438 +0000 UTC m=+9.387127371" watchObservedRunningTime="2025-11-01 00:23:00.706336829 +0000 UTC m=+10.267504763" Nov 1 00:23:07.725038 sudo[2304]: pam_unix(sudo:session): session closed for user root Nov 1 00:23:07.750448 sshd[2301]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:07.757590 systemd-logind[1963]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:23:07.760116 systemd[1]: sshd@6-172.31.30.202:22-139.178.89.65:44190.service: Deactivated successfully. Nov 1 00:23:07.768048 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:23:07.769315 systemd[1]: session-7.scope: Consumed 5.468s CPU time, 145.4M memory peak, 0B memory swap peak. Nov 1 00:23:07.775008 systemd-logind[1963]: Removed session 7. Nov 1 00:23:14.197335 systemd[1]: Created slice kubepods-besteffort-podb8ea6f30_73a1_439e_859a_7e0c0672a1f4.slice - libcontainer container kubepods-besteffort-podb8ea6f30_73a1_439e_859a_7e0c0672a1f4.slice. Nov 1 00:23:14.263054 kubelet[3182]: I1101 00:23:14.263006 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b8ea6f30-73a1-439e-859a-7e0c0672a1f4-typha-certs\") pod \"calico-typha-d587fd5f4-wpfct\" (UID: \"b8ea6f30-73a1-439e-859a-7e0c0672a1f4\") " pod="calico-system/calico-typha-d587fd5f4-wpfct" Nov 1 00:23:14.263054 kubelet[3182]: I1101 00:23:14.263068 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8ea6f30-73a1-439e-859a-7e0c0672a1f4-tigera-ca-bundle\") pod \"calico-typha-d587fd5f4-wpfct\" (UID: \"b8ea6f30-73a1-439e-859a-7e0c0672a1f4\") " pod="calico-system/calico-typha-d587fd5f4-wpfct" Nov 1 00:23:14.263054 kubelet[3182]: I1101 00:23:14.263100 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb4k\" (UniqueName: \"kubernetes.io/projected/b8ea6f30-73a1-439e-859a-7e0c0672a1f4-kube-api-access-7rb4k\") pod \"calico-typha-d587fd5f4-wpfct\" (UID: \"b8ea6f30-73a1-439e-859a-7e0c0672a1f4\") " pod="calico-system/calico-typha-d587fd5f4-wpfct" Nov 1 00:23:14.300895 systemd[1]: Created slice kubepods-besteffort-pod288db5c0_e23c_4629_b76e_879367a9b7b9.slice - libcontainer container kubepods-besteffort-pod288db5c0_e23c_4629_b76e_879367a9b7b9.slice. 
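The "Created slice kubepods-besteffort-pod….slice" lines here and earlier are the systemd cgroup driver at work (the log's NodeConfig shows "CgroupDriver":"systemd"): the pod UID is escaped for systemd unit naming, with dashes becoming underscores, and nested under the QoS-class slice. A small Go sketch reproducing exactly the names visible in this log; kubelet's real escaping handles additional characters, but only dashes occur in these UIDs.

// slice_sketch.go -- reproduces the systemd slice names logged above.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the per-pod slice unit name for a given QoS class,
// escaping the UID the way the names in this log are escaped ("-" -> "_").
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "9a783191-4a2c-42d8-ba76-e83e6dcc499b"))
	// Output: kubepods-besteffort-pod9a783191_4a2c_42d8_ba76_e83e6dcc499b.slice
}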
Nov 1 00:23:14.364815 kubelet[3182]: I1101 00:23:14.363857 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-var-run-calico\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.364815 kubelet[3182]: I1101 00:23:14.363916 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-cni-bin-dir\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.364815 kubelet[3182]: I1101 00:23:14.363943 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/288db5c0-e23c-4629-b76e-879367a9b7b9-node-certs\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.364815 kubelet[3182]: I1101 00:23:14.363978 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdl8h\" (UniqueName: \"kubernetes.io/projected/288db5c0-e23c-4629-b76e-879367a9b7b9-kube-api-access-xdl8h\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.364815 kubelet[3182]: I1101 00:23:14.363999 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-cni-net-dir\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.365595 kubelet[3182]: I1101 00:23:14.364015 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-var-lib-calico\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.365595 kubelet[3182]: I1101 00:23:14.364030 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/288db5c0-e23c-4629-b76e-879367a9b7b9-tigera-ca-bundle\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.365595 kubelet[3182]: I1101 00:23:14.364050 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-cni-log-dir\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.365595 kubelet[3182]: I1101 00:23:14.364066 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-flexvol-driver-host\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.365595 kubelet[3182]: I1101 00:23:14.364083 3182 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-lib-modules\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.366056 kubelet[3182]: I1101 00:23:14.364096 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-policysync\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.366056 kubelet[3182]: I1101 00:23:14.364110 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/288db5c0-e23c-4629-b76e-879367a9b7b9-xtables-lock\") pod \"calico-node-n8c22\" (UID: \"288db5c0-e23c-4629-b76e-879367a9b7b9\") " pod="calico-system/calico-node-n8c22" Nov 1 00:23:14.420622 kubelet[3182]: E1101 00:23:14.420578 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:14.467191 kubelet[3182]: I1101 00:23:14.465014 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9d66f695-3c82-4cb4-ac8a-5f7c10006e53-socket-dir\") pod \"csi-node-driver-5cfdt\" (UID: \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\") " pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:14.467191 kubelet[3182]: I1101 00:23:14.465099 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9d66f695-3c82-4cb4-ac8a-5f7c10006e53-varrun\") pod \"csi-node-driver-5cfdt\" (UID: \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\") " pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:14.467191 kubelet[3182]: I1101 00:23:14.465141 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7tx\" (UniqueName: \"kubernetes.io/projected/9d66f695-3c82-4cb4-ac8a-5f7c10006e53-kube-api-access-gx7tx\") pod \"csi-node-driver-5cfdt\" (UID: \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\") " pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:14.467191 kubelet[3182]: I1101 00:23:14.465179 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d66f695-3c82-4cb4-ac8a-5f7c10006e53-kubelet-dir\") pod \"csi-node-driver-5cfdt\" (UID: \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\") " pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:14.467191 kubelet[3182]: I1101 00:23:14.465233 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9d66f695-3c82-4cb4-ac8a-5f7c10006e53-registration-dir\") pod \"csi-node-driver-5cfdt\" (UID: \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\") " pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:14.470687 kubelet[3182]: E1101 00:23:14.470614 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: 
"", error: unexpected end of JSON input Nov 1 00:23:14.470946 kubelet[3182]: W1101 00:23:14.470926 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:14.471163 kubelet[3182]: E1101 00:23:14.471145 3182 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:14.472183 kubelet[3182]: E1101 00:23:14.471871 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:14.472310 kubelet[3182]: W1101 00:23:14.472292 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:14.472413 kubelet[3182]: E1101 00:23:14.472400 3182 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:14.476851 kubelet[3182]: E1101 00:23:14.476805 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:14.476851 kubelet[3182]: W1101 00:23:14.476828 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:14.476851 kubelet[3182]: E1101 00:23:14.476848 3182 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:14.521473 kubelet[3182]: E1101 00:23:14.521439 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:14.521473 kubelet[3182]: W1101 00:23:14.521469 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:14.521650 kubelet[3182]: E1101 00:23:14.521494 3182 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:14.527068 containerd[1985]: time="2025-11-01T00:23:14.527024500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d587fd5f4-wpfct,Uid:b8ea6f30-73a1-439e-859a-7e0c0672a1f4,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:14.566934 kubelet[3182]: E1101 00:23:14.566252 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:14.566934 kubelet[3182]: W1101 00:23:14.566280 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:14.566934 kubelet[3182]: E1101 00:23:14.566306 3182 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[... the same three-message FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:697) for the missing nodeagent~uds driver repeats with fresh timestamps through Nov 1 00:23:14.584973 ...]
Nov 1 00:23:14.595415 containerd[1985]: time="2025-11-01T00:23:14.595262944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:14.597793 containerd[1985]: time="2025-11-01T00:23:14.595380182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:14.597793 containerd[1985]: time="2025-11-01T00:23:14.597409611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:14.597793 containerd[1985]: time="2025-11-01T00:23:14.597533437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:14.616315 kubelet[3182]: E1101 00:23:14.616281 3182 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:14.616315 kubelet[3182]: W1101 00:23:14.616309 3182 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:14.616521 kubelet[3182]: E1101 00:23:14.616333 3182 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:14.619462 containerd[1985]: time="2025-11-01T00:23:14.619417947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n8c22,Uid:288db5c0-e23c-4629-b76e-879367a9b7b9,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:14.697023 systemd[1]: Started cri-containerd-e9b271ea2cc8a744f17e8b2135408b55465b25e304ff2ee03fb060e8a890c13b.scope - libcontainer container e9b271ea2cc8a744f17e8b2135408b55465b25e304ff2ee03fb060e8a890c13b. Nov 1 00:23:14.705562 containerd[1985]: time="2025-11-01T00:23:14.705003724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:14.705562 containerd[1985]: time="2025-11-01T00:23:14.705085730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:14.705562 containerd[1985]: time="2025-11-01T00:23:14.705117208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:14.705562 containerd[1985]: time="2025-11-01T00:23:14.705226452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:14.736956 systemd[1]: Started cri-containerd-ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382.scope - libcontainer container ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382. Nov 1 00:23:14.784636 containerd[1985]: time="2025-11-01T00:23:14.784586185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n8c22,Uid:288db5c0-e23c-4629-b76e-879367a9b7b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\"" Nov 1 00:23:14.789374 containerd[1985]: time="2025-11-01T00:23:14.789337252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:23:14.833156 containerd[1985]: time="2025-11-01T00:23:14.831996560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d587fd5f4-wpfct,Uid:b8ea6f30-73a1-439e-859a-7e0c0672a1f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9b271ea2cc8a744f17e8b2135408b55465b25e304ff2ee03fb060e8a890c13b\"" Nov 1 00:23:15.680557 kubelet[3182]: E1101 00:23:15.680497 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:16.080329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674619567.mount: Deactivated successfully. 
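All of the FlexVolume probe noise above reduces to one root cause: the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so each probe produces empty output, and unmarshalling an empty string yields exactly the logged error. A two-line Go reproduction:

// unmarshal_sketch.go -- json.Unmarshal of empty output produces the
// "unexpected end of JSON input" error repeated throughout this log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}

The errors are harmless here: Calico's flexvol-driver init container (started just below) installs the uds binary into the flexvol-driver-host host path, after which the probe succeeds.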
Nov 1 00:23:16.357232 containerd[1985]: time="2025-11-01T00:23:16.357110085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:16.359514 containerd[1985]: time="2025-11-01T00:23:16.359178188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 1 00:23:16.362627 containerd[1985]: time="2025-11-01T00:23:16.361528928Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:16.365473 containerd[1985]: time="2025-11-01T00:23:16.364773497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:16.365473 containerd[1985]: time="2025-11-01T00:23:16.365351120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.575972291s" Nov 1 00:23:16.365473 containerd[1985]: time="2025-11-01T00:23:16.365381740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:23:16.368087 containerd[1985]: time="2025-11-01T00:23:16.368051765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:23:16.373166 containerd[1985]: time="2025-11-01T00:23:16.373126827Z" level=info msg="CreateContainer within sandbox \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:23:16.423010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207271190.mount: Deactivated successfully. Nov 1 00:23:16.432573 containerd[1985]: time="2025-11-01T00:23:16.432478299Z" level=info msg="CreateContainer within sandbox \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348\"" Nov 1 00:23:16.433442 containerd[1985]: time="2025-11-01T00:23:16.433406094Z" level=info msg="StartContainer for \"3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348\"" Nov 1 00:23:16.475464 systemd[1]: Started cri-containerd-3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348.scope - libcontainer container 3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348. Nov 1 00:23:16.510135 containerd[1985]: time="2025-11-01T00:23:16.509431606Z" level=info msg="StartContainer for \"3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348\" returns successfully" Nov 1 00:23:16.520641 systemd[1]: cri-containerd-3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348.scope: Deactivated successfully. Nov 1 00:23:16.551329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:16.593406 containerd[1985]: time="2025-11-01T00:23:16.582135976Z" level=info msg="shim disconnected" id=3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348 namespace=k8s.io Nov 1 00:23:16.593406 containerd[1985]: time="2025-11-01T00:23:16.593379554Z" level=warning msg="cleaning up after shim disconnected" id=3a0248d831afff21bcdf3d1ae6969f634a3fe06484e8ed45396def8f96ea9348 namespace=k8s.io Nov 1 00:23:16.593406 containerd[1985]: time="2025-11-01T00:23:16.593399418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:17.682362 kubelet[3182]: E1101 00:23:17.680903 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:18.775976 containerd[1985]: time="2025-11-01T00:23:18.775911620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:18.777805 containerd[1985]: time="2025-11-01T00:23:18.777751517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 1 00:23:18.787462 containerd[1985]: time="2025-11-01T00:23:18.780315869Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:18.787893 containerd[1985]: time="2025-11-01T00:23:18.784241765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.415904887s" Nov 1 00:23:18.787893 containerd[1985]: time="2025-11-01T00:23:18.787790268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:23:18.788597 containerd[1985]: time="2025-11-01T00:23:18.788565020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:18.790458 containerd[1985]: time="2025-11-01T00:23:18.790356196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:23:18.810717 containerd[1985]: time="2025-11-01T00:23:18.810677295Z" level=info msg="CreateContainer within sandbox \"e9b271ea2cc8a744f17e8b2135408b55465b25e304ff2ee03fb060e8a890c13b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:23:18.836605 containerd[1985]: time="2025-11-01T00:23:18.836558030Z" level=info msg="CreateContainer within sandbox \"e9b271ea2cc8a744f17e8b2135408b55465b25e304ff2ee03fb060e8a890c13b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe82eddbd2d3dd3372fe1f62d645746a3fc63aecad6b9486cacdecb4493fcb4b\"" Nov 1 00:23:18.837767 containerd[1985]: time="2025-11-01T00:23:18.837424586Z" level=info msg="StartContainer for \"fe82eddbd2d3dd3372fe1f62d645746a3fc63aecad6b9486cacdecb4493fcb4b\"" Nov 1 00:23:18.892466 systemd[1]: Started 
cri-containerd-fe82eddbd2d3dd3372fe1f62d645746a3fc63aecad6b9486cacdecb4493fcb4b.scope - libcontainer container fe82eddbd2d3dd3372fe1f62d645746a3fc63aecad6b9486cacdecb4493fcb4b. Nov 1 00:23:18.947147 containerd[1985]: time="2025-11-01T00:23:18.947102039Z" level=info msg="StartContainer for \"fe82eddbd2d3dd3372fe1f62d645746a3fc63aecad6b9486cacdecb4493fcb4b\" returns successfully" Nov 1 00:23:19.680202 kubelet[3182]: E1101 00:23:19.680129 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:20.889996 kubelet[3182]: I1101 00:23:20.889948 3182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:21.680352 kubelet[3182]: E1101 00:23:21.680293 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:22.539834 containerd[1985]: time="2025-11-01T00:23:22.539759597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:22.541747 containerd[1985]: time="2025-11-01T00:23:22.541567552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:23:22.543931 containerd[1985]: time="2025-11-01T00:23:22.543893656Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:22.564850 containerd[1985]: time="2025-11-01T00:23:22.564673281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:22.565746 containerd[1985]: time="2025-11-01T00:23:22.565294606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.774899849s" Nov 1 00:23:22.565746 containerd[1985]: time="2025-11-01T00:23:22.565326363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:23:22.572245 containerd[1985]: time="2025-11-01T00:23:22.572198178Z" level=info msg="CreateContainer within sandbox \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:23:22.600155 containerd[1985]: time="2025-11-01T00:23:22.600101459Z" level=info msg="CreateContainer within sandbox \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c\"" Nov 1 00:23:22.600736 containerd[1985]: 
time="2025-11-01T00:23:22.600694451Z" level=info msg="StartContainer for \"c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c\"" Nov 1 00:23:22.649663 systemd[1]: run-containerd-runc-k8s.io-c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c-runc.uPpAgv.mount: Deactivated successfully. Nov 1 00:23:22.656954 systemd[1]: Started cri-containerd-c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c.scope - libcontainer container c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c. Nov 1 00:23:22.708352 containerd[1985]: time="2025-11-01T00:23:22.708308578Z" level=info msg="StartContainer for \"c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c\" returns successfully" Nov 1 00:23:22.927041 kubelet[3182]: I1101 00:23:22.925638 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d587fd5f4-wpfct" podStartSLOduration=4.971341914 podStartE2EDuration="8.925622056s" podCreationTimestamp="2025-11-01 00:23:14 +0000 UTC" firstStartedPulling="2025-11-01 00:23:14.835254864 +0000 UTC m=+24.396422776" lastFinishedPulling="2025-11-01 00:23:18.789534993 +0000 UTC m=+28.350702918" observedRunningTime="2025-11-01 00:23:19.898798603 +0000 UTC m=+29.459966540" watchObservedRunningTime="2025-11-01 00:23:22.925622056 +0000 UTC m=+32.486789990" Nov 1 00:23:23.680777 kubelet[3182]: E1101 00:23:23.680694 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:23.857823 systemd[1]: cri-containerd-c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c.scope: Deactivated successfully. Nov 1 00:23:23.903514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c-rootfs.mount: Deactivated successfully. Nov 1 00:23:23.917176 containerd[1985]: time="2025-11-01T00:23:23.915136385Z" level=info msg="shim disconnected" id=c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c namespace=k8s.io Nov 1 00:23:23.917176 containerd[1985]: time="2025-11-01T00:23:23.917155209Z" level=warning msg="cleaning up after shim disconnected" id=c156f3541254f4430c60cd86d7e78251f0a7c958f9f7c3dbff68d5e7ebaab29c namespace=k8s.io Nov 1 00:23:23.917176 containerd[1985]: time="2025-11-01T00:23:23.917170493Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:23.925451 kubelet[3182]: I1101 00:23:23.925417 3182 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:23:23.985363 systemd[1]: Created slice kubepods-besteffort-pod3b1a064e_eaea_4078_a670_51fea2063bf7.slice - libcontainer container kubepods-besteffort-pod3b1a064e_eaea_4078_a670_51fea2063bf7.slice. Nov 1 00:23:24.006174 systemd[1]: Created slice kubepods-burstable-pod8eba1079_36a0_4f1b_a35a_7ac8d14e183b.slice - libcontainer container kubepods-burstable-pod8eba1079_36a0_4f1b_a35a_7ac8d14e183b.slice. Nov 1 00:23:24.018605 systemd[1]: Created slice kubepods-burstable-podccf197b2_b2cc_466e_947f_e45189c998df.slice - libcontainer container kubepods-burstable-podccf197b2_b2cc_466e_947f_e45189c998df.slice. 
Nov 1 00:23:24.034973 kubelet[3182]: I1101 00:23:24.034937 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1a064e-eaea-4078-a670-51fea2063bf7-tigera-ca-bundle\") pod \"calico-kube-controllers-85c56f6579-hjmzt\" (UID: \"3b1a064e-eaea-4078-a670-51fea2063bf7\") " pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" Nov 1 00:23:24.035457 kubelet[3182]: I1101 00:23:24.035023 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zlrk\" (UniqueName: \"kubernetes.io/projected/8eba1079-36a0-4f1b-a35a-7ac8d14e183b-kube-api-access-8zlrk\") pod \"coredns-66bc5c9577-f2crj\" (UID: \"8eba1079-36a0-4f1b-a35a-7ac8d14e183b\") " pod="kube-system/coredns-66bc5c9577-f2crj" Nov 1 00:23:24.035457 kubelet[3182]: I1101 00:23:24.035064 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eba1079-36a0-4f1b-a35a-7ac8d14e183b-config-volume\") pod \"coredns-66bc5c9577-f2crj\" (UID: \"8eba1079-36a0-4f1b-a35a-7ac8d14e183b\") " pod="kube-system/coredns-66bc5c9577-f2crj" Nov 1 00:23:24.035457 kubelet[3182]: I1101 00:23:24.035092 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qnt\" (UniqueName: \"kubernetes.io/projected/3b1a064e-eaea-4078-a670-51fea2063bf7-kube-api-access-d8qnt\") pod \"calico-kube-controllers-85c56f6579-hjmzt\" (UID: \"3b1a064e-eaea-4078-a670-51fea2063bf7\") " pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" Nov 1 00:23:24.040779 systemd[1]: Created slice kubepods-besteffort-pod29fc9071_7019_4315_907a_15289e1e3c38.slice - libcontainer container kubepods-besteffort-pod29fc9071_7019_4315_907a_15289e1e3c38.slice. Nov 1 00:23:24.052448 systemd[1]: Created slice kubepods-besteffort-poda4244289_0ea7_4d4f_a667_210bd4cdc63c.slice - libcontainer container kubepods-besteffort-poda4244289_0ea7_4d4f_a667_210bd4cdc63c.slice. Nov 1 00:23:24.064455 systemd[1]: Created slice kubepods-besteffort-pod0daeebf2_097b_4237_b016_04ef974e7589.slice - libcontainer container kubepods-besteffort-pod0daeebf2_097b_4237_b016_04ef974e7589.slice. Nov 1 00:23:24.075952 systemd[1]: Created slice kubepods-besteffort-pod3d0071e7_dbca_4b76_a432_c8b1bb561ab0.slice - libcontainer container kubepods-besteffort-pod3d0071e7_dbca_4b76_a432_c8b1bb561ab0.slice. 
Nov 1 00:23:24.138934 kubelet[3182]: I1101 00:23:24.136196 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3d0071e7-dbca-4b76-a432-c8b1bb561ab0-goldmane-key-pair\") pod \"goldmane-7c778bb748-qq2mr\" (UID: \"3d0071e7-dbca-4b76-a432-c8b1bb561ab0\") " pod="calico-system/goldmane-7c778bb748-qq2mr" Nov 1 00:23:24.138934 kubelet[3182]: I1101 00:23:24.136245 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjzt\" (UniqueName: \"kubernetes.io/projected/3d0071e7-dbca-4b76-a432-c8b1bb561ab0-kube-api-access-zkjzt\") pod \"goldmane-7c778bb748-qq2mr\" (UID: \"3d0071e7-dbca-4b76-a432-c8b1bb561ab0\") " pod="calico-system/goldmane-7c778bb748-qq2mr" Nov 1 00:23:24.138934 kubelet[3182]: I1101 00:23:24.136262 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r74t6\" (UniqueName: \"kubernetes.io/projected/0daeebf2-097b-4237-b016-04ef974e7589-kube-api-access-r74t6\") pod \"whisker-6976cb7758-krmm4\" (UID: \"0daeebf2-097b-4237-b016-04ef974e7589\") " pod="calico-system/whisker-6976cb7758-krmm4" Nov 1 00:23:24.138934 kubelet[3182]: I1101 00:23:24.136283 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58vl\" (UniqueName: \"kubernetes.io/projected/29fc9071-7019-4315-907a-15289e1e3c38-kube-api-access-k58vl\") pod \"calico-apiserver-67d9f69bfb-kcfrc\" (UID: \"29fc9071-7019-4315-907a-15289e1e3c38\") " pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" Nov 1 00:23:24.138934 kubelet[3182]: I1101 00:23:24.136322 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0071e7-dbca-4b76-a432-c8b1bb561ab0-config\") pod \"goldmane-7c778bb748-qq2mr\" (UID: \"3d0071e7-dbca-4b76-a432-c8b1bb561ab0\") " pod="calico-system/goldmane-7c778bb748-qq2mr" Nov 1 00:23:24.139175 kubelet[3182]: I1101 00:23:24.136336 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0daeebf2-097b-4237-b016-04ef974e7589-whisker-ca-bundle\") pod \"whisker-6976cb7758-krmm4\" (UID: \"0daeebf2-097b-4237-b016-04ef974e7589\") " pod="calico-system/whisker-6976cb7758-krmm4" Nov 1 00:23:24.139175 kubelet[3182]: I1101 00:23:24.136363 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg7g\" (UniqueName: \"kubernetes.io/projected/ccf197b2-b2cc-466e-947f-e45189c998df-kube-api-access-xbg7g\") pod \"coredns-66bc5c9577-cdpgq\" (UID: \"ccf197b2-b2cc-466e-947f-e45189c998df\") " pod="kube-system/coredns-66bc5c9577-cdpgq" Nov 1 00:23:24.139175 kubelet[3182]: I1101 00:23:24.136380 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjmk\" (UniqueName: \"kubernetes.io/projected/a4244289-0ea7-4d4f-a667-210bd4cdc63c-kube-api-access-8tjmk\") pod \"calico-apiserver-67d9f69bfb-mczl8\" (UID: \"a4244289-0ea7-4d4f-a667-210bd4cdc63c\") " pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" Nov 1 00:23:24.139175 kubelet[3182]: I1101 00:23:24.136397 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ccf197b2-b2cc-466e-947f-e45189c998df-config-volume\") pod \"coredns-66bc5c9577-cdpgq\" (UID: \"ccf197b2-b2cc-466e-947f-e45189c998df\") " pod="kube-system/coredns-66bc5c9577-cdpgq" Nov 1 00:23:24.139175 kubelet[3182]: I1101 00:23:24.136412 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4244289-0ea7-4d4f-a667-210bd4cdc63c-calico-apiserver-certs\") pod \"calico-apiserver-67d9f69bfb-mczl8\" (UID: \"a4244289-0ea7-4d4f-a667-210bd4cdc63c\") " pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" Nov 1 00:23:24.143527 kubelet[3182]: I1101 00:23:24.136442 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d0071e7-dbca-4b76-a432-c8b1bb561ab0-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-qq2mr\" (UID: \"3d0071e7-dbca-4b76-a432-c8b1bb561ab0\") " pod="calico-system/goldmane-7c778bb748-qq2mr" Nov 1 00:23:24.143527 kubelet[3182]: I1101 00:23:24.136463 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0daeebf2-097b-4237-b016-04ef974e7589-whisker-backend-key-pair\") pod \"whisker-6976cb7758-krmm4\" (UID: \"0daeebf2-097b-4237-b016-04ef974e7589\") " pod="calico-system/whisker-6976cb7758-krmm4" Nov 1 00:23:24.143527 kubelet[3182]: I1101 00:23:24.136490 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29fc9071-7019-4315-907a-15289e1e3c38-calico-apiserver-certs\") pod \"calico-apiserver-67d9f69bfb-kcfrc\" (UID: \"29fc9071-7019-4315-907a-15289e1e3c38\") " pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" Nov 1 00:23:24.302554 containerd[1985]: time="2025-11-01T00:23:24.302405679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c56f6579-hjmzt,Uid:3b1a064e-eaea-4078-a670-51fea2063bf7,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:24.327894 containerd[1985]: time="2025-11-01T00:23:24.327839005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2crj,Uid:8eba1079-36a0-4f1b-a35a-7ac8d14e183b,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:24.330367 containerd[1985]: time="2025-11-01T00:23:24.330321099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cdpgq,Uid:ccf197b2-b2cc-466e-947f-e45189c998df,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:24.354045 containerd[1985]: time="2025-11-01T00:23:24.353985134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-kcfrc,Uid:29fc9071-7019-4315-907a-15289e1e3c38,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:24.362658 containerd[1985]: time="2025-11-01T00:23:24.362381372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-mczl8,Uid:a4244289-0ea7-4d4f-a667-210bd4cdc63c,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:24.393687 containerd[1985]: time="2025-11-01T00:23:24.393626845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qq2mr,Uid:3d0071e7-dbca-4b76-a432-c8b1bb561ab0,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:24.413427 containerd[1985]: time="2025-11-01T00:23:24.413375502Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6976cb7758-krmm4,Uid:0daeebf2-097b-4237-b016-04ef974e7589,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:24.856291 containerd[1985]: time="2025-11-01T00:23:24.856079581Z" level=error msg="Failed to destroy network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.856904 containerd[1985]: time="2025-11-01T00:23:24.856737186Z" level=error msg="Failed to destroy network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.863763 containerd[1985]: time="2025-11-01T00:23:24.862750536Z" level=error msg="encountered an error cleaning up failed sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.863763 containerd[1985]: time="2025-11-01T00:23:24.862847383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qq2mr,Uid:3d0071e7-dbca-4b76-a432-c8b1bb561ab0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.863763 containerd[1985]: time="2025-11-01T00:23:24.862900945Z" level=error msg="Failed to destroy network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.863763 containerd[1985]: time="2025-11-01T00:23:24.863337865Z" level=error msg="encountered an error cleaning up failed sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.863763 containerd[1985]: time="2025-11-01T00:23:24.863405944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-mczl8,Uid:a4244289-0ea7-4d4f-a667-210bd4cdc63c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.868756 containerd[1985]: time="2025-11-01T00:23:24.868686117Z" level=error msg="Failed to destroy network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.869612 containerd[1985]: time="2025-11-01T00:23:24.869276556Z" level=error msg="encountered an error cleaning up failed sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.869612 containerd[1985]: time="2025-11-01T00:23:24.869349545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6976cb7758-krmm4,Uid:0daeebf2-097b-4237-b016-04ef974e7589,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.869612 containerd[1985]: time="2025-11-01T00:23:24.869513396Z" level=error msg="Failed to destroy network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870138 containerd[1985]: time="2025-11-01T00:23:24.870103923Z" level=error msg="encountered an error cleaning up failed sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870471 containerd[1985]: time="2025-11-01T00:23:24.870240356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-kcfrc,Uid:29fc9071-7019-4315-907a-15289e1e3c38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870471 containerd[1985]: time="2025-11-01T00:23:24.862767213Z" level=error msg="encountered an error cleaning up failed sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870471 containerd[1985]: time="2025-11-01T00:23:24.870314581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2crj,Uid:8eba1079-36a0-4f1b-a35a-7ac8d14e183b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870471 containerd[1985]: 
time="2025-11-01T00:23:24.870339849Z" level=error msg="Failed to destroy network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870711 kubelet[3182]: E1101 00:23:24.870579 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870711 kubelet[3182]: E1101 00:23:24.870652 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f2crj" Nov 1 00:23:24.870711 kubelet[3182]: E1101 00:23:24.870680 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f2crj" Nov 1 00:23:24.870920 kubelet[3182]: E1101 00:23:24.870776 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-f2crj_kube-system(8eba1079-36a0-4f1b-a35a-7ac8d14e183b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-f2crj_kube-system(8eba1079-36a0-4f1b-a35a-7ac8d14e183b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-f2crj" podUID="8eba1079-36a0-4f1b-a35a-7ac8d14e183b" Nov 1 00:23:24.870920 kubelet[3182]: E1101 00:23:24.870841 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.870920 kubelet[3182]: E1101 00:23:24.870866 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qq2mr" Nov 1 00:23:24.871222 kubelet[3182]: E1101 
00:23:24.870885 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qq2mr" Nov 1 00:23:24.871222 kubelet[3182]: E1101 00:23:24.870926 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-qq2mr_calico-system(3d0071e7-dbca-4b76-a432-c8b1bb561ab0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-qq2mr_calico-system(3d0071e7-dbca-4b76-a432-c8b1bb561ab0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:23:24.871222 kubelet[3182]: E1101 00:23:24.870972 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.871389 kubelet[3182]: E1101 00:23:24.870996 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" Nov 1 00:23:24.871389 kubelet[3182]: E1101 00:23:24.871012 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" Nov 1 00:23:24.871389 kubelet[3182]: E1101 00:23:24.871047 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67d9f69bfb-mczl8_calico-apiserver(a4244289-0ea7-4d4f-a667-210bd4cdc63c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67d9f69bfb-mczl8_calico-apiserver(a4244289-0ea7-4d4f-a667-210bd4cdc63c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 
00:23:24.871538 kubelet[3182]: E1101 00:23:24.871099 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.871538 kubelet[3182]: E1101 00:23:24.871119 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6976cb7758-krmm4" Nov 1 00:23:24.871538 kubelet[3182]: E1101 00:23:24.871139 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6976cb7758-krmm4" Nov 1 00:23:24.871658 kubelet[3182]: E1101 00:23:24.871178 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6976cb7758-krmm4_calico-system(0daeebf2-097b-4237-b016-04ef974e7589)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6976cb7758-krmm4_calico-system(0daeebf2-097b-4237-b016-04ef974e7589)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6976cb7758-krmm4" podUID="0daeebf2-097b-4237-b016-04ef974e7589" Nov 1 00:23:24.871658 kubelet[3182]: E1101 00:23:24.871270 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.871658 kubelet[3182]: E1101 00:23:24.871293 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" Nov 1 00:23:24.871830 kubelet[3182]: E1101 00:23:24.871314 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" Nov 1 00:23:24.871830 kubelet[3182]: E1101 00:23:24.871350 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67d9f69bfb-kcfrc_calico-apiserver(29fc9071-7019-4315-907a-15289e1e3c38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67d9f69bfb-kcfrc_calico-apiserver(29fc9071-7019-4315-907a-15289e1e3c38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:23:24.875401 containerd[1985]: time="2025-11-01T00:23:24.872802456Z" level=error msg="encountered an error cleaning up failed sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.875401 containerd[1985]: time="2025-11-01T00:23:24.872878061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c56f6579-hjmzt,Uid:3b1a064e-eaea-4078-a670-51fea2063bf7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.875567 kubelet[3182]: E1101 00:23:24.873115 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.875567 kubelet[3182]: E1101 00:23:24.873172 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" Nov 1 00:23:24.875567 kubelet[3182]: E1101 00:23:24.873197 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" Nov 1 00:23:24.875671 kubelet[3182]: E1101 00:23:24.873252 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-85c56f6579-hjmzt_calico-system(3b1a064e-eaea-4078-a670-51fea2063bf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85c56f6579-hjmzt_calico-system(3b1a064e-eaea-4078-a670-51fea2063bf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:23:24.877301 containerd[1985]: time="2025-11-01T00:23:24.862858370Z" level=error msg="Failed to destroy network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.878292 containerd[1985]: time="2025-11-01T00:23:24.877786940Z" level=error msg="encountered an error cleaning up failed sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.878292 containerd[1985]: time="2025-11-01T00:23:24.877889494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cdpgq,Uid:ccf197b2-b2cc-466e-947f-e45189c998df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.878586 kubelet[3182]: E1101 00:23:24.878540 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:24.878710 kubelet[3182]: E1101 00:23:24.878593 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cdpgq" Nov 1 00:23:24.878710 kubelet[3182]: E1101 00:23:24.878623 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cdpgq" Nov 1 00:23:24.878710 kubelet[3182]: 
E1101 00:23:24.878691 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cdpgq_kube-system(ccf197b2-b2cc-466e-947f-e45189c998df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cdpgq_kube-system(ccf197b2-b2cc-466e-947f-e45189c998df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cdpgq" podUID="ccf197b2-b2cc-466e-947f-e45189c998df" Nov 1 00:23:24.915090 kubelet[3182]: I1101 00:23:24.915058 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:24.924020 kubelet[3182]: I1101 00:23:24.923981 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:24.930874 containerd[1985]: time="2025-11-01T00:23:24.930823405Z" level=info msg="StopPodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\"" Nov 1 00:23:24.936233 containerd[1985]: time="2025-11-01T00:23:24.935350181Z" level=info msg="Ensure that sandbox 9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116 in task-service has been cleanup successfully" Nov 1 00:23:24.936233 containerd[1985]: time="2025-11-01T00:23:24.935767060Z" level=info msg="StopPodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\"" Nov 1 00:23:24.936233 containerd[1985]: time="2025-11-01T00:23:24.935966909Z" level=info msg="Ensure that sandbox 1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384 in task-service has been cleanup successfully" Nov 1 00:23:24.959811 containerd[1985]: time="2025-11-01T00:23:24.959650471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:23:24.964564 kubelet[3182]: I1101 00:23:24.963413 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:24.964997 containerd[1985]: time="2025-11-01T00:23:24.964863143Z" level=info msg="StopPodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\"" Nov 1 00:23:24.967156 containerd[1985]: time="2025-11-01T00:23:24.967129471Z" level=info msg="Ensure that sandbox 019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9 in task-service has been cleanup successfully" Nov 1 00:23:24.970452 kubelet[3182]: I1101 00:23:24.970132 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:24.970783 containerd[1985]: time="2025-11-01T00:23:24.970747607Z" level=info msg="StopPodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\"" Nov 1 00:23:24.971327 containerd[1985]: time="2025-11-01T00:23:24.971115865Z" level=info msg="Ensure that sandbox 7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625 in task-service has been cleanup successfully" Nov 1 00:23:24.982843 kubelet[3182]: I1101 00:23:24.982608 3182 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:24.994142 containerd[1985]: time="2025-11-01T00:23:24.994073568Z" level=info msg="StopPodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\"" Nov 1 00:23:24.994425 containerd[1985]: time="2025-11-01T00:23:24.994301169Z" level=info msg="Ensure that sandbox 1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806 in task-service has been cleanup successfully" Nov 1 00:23:25.000673 kubelet[3182]: I1101 00:23:24.999768 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:25.004513 containerd[1985]: time="2025-11-01T00:23:25.003670832Z" level=info msg="StopPodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\"" Nov 1 00:23:25.019678 containerd[1985]: time="2025-11-01T00:23:25.019502954Z" level=info msg="Ensure that sandbox 57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530 in task-service has been cleanup successfully" Nov 1 00:23:25.022970 kubelet[3182]: I1101 00:23:25.022914 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:25.030594 containerd[1985]: time="2025-11-01T00:23:25.029412384Z" level=info msg="StopPodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\"" Nov 1 00:23:25.030594 containerd[1985]: time="2025-11-01T00:23:25.029588529Z" level=info msg="Ensure that sandbox e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f in task-service has been cleanup successfully" Nov 1 00:23:25.047768 containerd[1985]: time="2025-11-01T00:23:25.047688447Z" level=error msg="StopPodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" failed" error="failed to destroy network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.049515 kubelet[3182]: E1101 00:23:25.049134 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:25.049515 kubelet[3182]: E1101 00:23:25.049177 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384"} Nov 1 00:23:25.049515 kubelet[3182]: E1101 00:23:25.049223 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d0071e7-dbca-4b76-a432-c8b1bb561ab0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.049515 
kubelet[3182]: E1101 00:23:25.049249 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d0071e7-dbca-4b76-a432-c8b1bb561ab0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:23:25.058720 containerd[1985]: time="2025-11-01T00:23:25.058022886Z" level=error msg="StopPodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" failed" error="failed to destroy network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.059601 kubelet[3182]: E1101 00:23:25.059080 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:25.059601 kubelet[3182]: E1101 00:23:25.059127 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625"} Nov 1 00:23:25.059601 kubelet[3182]: E1101 00:23:25.059158 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29fc9071-7019-4315-907a-15289e1e3c38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.061822 kubelet[3182]: E1101 00:23:25.059190 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29fc9071-7019-4315-907a-15289e1e3c38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:23:25.126044 containerd[1985]: time="2025-11-01T00:23:25.125648450Z" level=error msg="StopPodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" failed" error="failed to destroy network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:23:25.126147 kubelet[3182]: E1101 00:23:25.125896 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:25.126147 kubelet[3182]: E1101 00:23:25.125938 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530"} Nov 1 00:23:25.126147 kubelet[3182]: E1101 00:23:25.125967 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8eba1079-36a0-4f1b-a35a-7ac8d14e183b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.126147 kubelet[3182]: E1101 00:23:25.125993 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8eba1079-36a0-4f1b-a35a-7ac8d14e183b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-f2crj" podUID="8eba1079-36a0-4f1b-a35a-7ac8d14e183b" Nov 1 00:23:25.130382 containerd[1985]: time="2025-11-01T00:23:25.129382980Z" level=error msg="StopPodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" failed" error="failed to destroy network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.130493 kubelet[3182]: E1101 00:23:25.130248 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:25.130493 kubelet[3182]: E1101 00:23:25.130289 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116"} Nov 1 00:23:25.130493 kubelet[3182]: E1101 00:23:25.130320 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4244289-0ea7-4d4f-a667-210bd4cdc63c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.130493 kubelet[3182]: E1101 00:23:25.130347 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4244289-0ea7-4d4f-a667-210bd4cdc63c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:23:25.140005 containerd[1985]: time="2025-11-01T00:23:25.139445217Z" level=error msg="StopPodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" failed" error="failed to destroy network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.140645 kubelet[3182]: E1101 00:23:25.140205 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:25.140645 kubelet[3182]: E1101 00:23:25.140256 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9"} Nov 1 00:23:25.140645 kubelet[3182]: E1101 00:23:25.140285 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0daeebf2-097b-4237-b016-04ef974e7589\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.140645 kubelet[3182]: E1101 00:23:25.140321 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0daeebf2-097b-4237-b016-04ef974e7589\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6976cb7758-krmm4" podUID="0daeebf2-097b-4237-b016-04ef974e7589" Nov 1 00:23:25.151150 containerd[1985]: time="2025-11-01T00:23:25.151105894Z" level=error msg="StopPodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" failed" 
error="failed to destroy network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.152070 kubelet[3182]: E1101 00:23:25.151862 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:25.152070 kubelet[3182]: E1101 00:23:25.151906 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f"} Nov 1 00:23:25.152070 kubelet[3182]: E1101 00:23:25.151933 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b1a064e-eaea-4078-a670-51fea2063bf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.152070 kubelet[3182]: E1101 00:23:25.151958 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b1a064e-eaea-4078-a670-51fea2063bf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:23:25.158914 containerd[1985]: time="2025-11-01T00:23:25.158872118Z" level=error msg="StopPodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" failed" error="failed to destroy network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.159125 kubelet[3182]: E1101 00:23:25.159086 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:25.159188 kubelet[3182]: E1101 00:23:25.159138 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806"} Nov 1 00:23:25.159188 kubelet[3182]: E1101 
00:23:25.159165 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ccf197b2-b2cc-466e-947f-e45189c998df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:25.159290 kubelet[3182]: E1101 00:23:25.159203 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ccf197b2-b2cc-466e-947f-e45189c998df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cdpgq" podUID="ccf197b2-b2cc-466e-947f-e45189c998df" Nov 1 00:23:25.686926 systemd[1]: Created slice kubepods-besteffort-pod9d66f695_3c82_4cb4_ac8a_5f7c10006e53.slice - libcontainer container kubepods-besteffort-pod9d66f695_3c82_4cb4_ac8a_5f7c10006e53.slice. Nov 1 00:23:25.692944 containerd[1985]: time="2025-11-01T00:23:25.692904693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cfdt,Uid:9d66f695-3c82-4cb4-ac8a-5f7c10006e53,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:25.782839 containerd[1985]: time="2025-11-01T00:23:25.782789807Z" level=error msg="Failed to destroy network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.783383 containerd[1985]: time="2025-11-01T00:23:25.783155638Z" level=error msg="encountered an error cleaning up failed sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.786526 containerd[1985]: time="2025-11-01T00:23:25.783407626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cfdt,Uid:9d66f695-3c82-4cb4-ac8a-5f7c10006e53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.786640 kubelet[3182]: E1101 00:23:25.783700 3182 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:25.786640 kubelet[3182]: E1101 00:23:25.783772 3182 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:25.786640 kubelet[3182]: E1101 00:23:25.783806 3182 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5cfdt" Nov 1 00:23:25.786887 kubelet[3182]: E1101 00:23:25.783875 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:25.788760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8-shm.mount: Deactivated successfully. 
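Every ADD and DEL failure above shares the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename before doing any work, and that file does not exist until the calico/node container starts and bind-mounts /var/lib/calico/. A minimal Go sketch of that precondition check, assuming only the path quoted in the error text (the helper name is invented for illustration, not taken from the plugin's source):

    package main

    import (
        "fmt"
        "os"
    )

    // nodenameReady mirrors the check behind the errors above: Calico's CNI
    // plugin refuses ADD/DEL until /var/lib/calico/nodename exists.
    func nodenameReady() error {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            return fmt.Errorf("calico not ready: %w (is calico/node running and /var/lib/calico/ mounted?)", err)
        }
        return nil
    }

    func main() {
        if err := nodenameReady(); err != nil {
            fmt.Println(err) // same shape as the "stat /var/lib/calico/nodename" errors in the log
            return
        }
        fmt.Println("nodename present; CNI ADD/DEL can proceed")
    }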
Nov 1 00:23:26.026236 kubelet[3182]: I1101 00:23:26.026052 3182 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:26.027488 containerd[1985]: time="2025-11-01T00:23:26.026815394Z" level=info msg="StopPodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\"" Nov 1 00:23:26.027488 containerd[1985]: time="2025-11-01T00:23:26.027035179Z" level=info msg="Ensure that sandbox 86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8 in task-service has been cleanup successfully" Nov 1 00:23:26.057088 containerd[1985]: time="2025-11-01T00:23:26.057039888Z" level=error msg="StopPodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" failed" error="failed to destroy network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:26.057360 kubelet[3182]: E1101 00:23:26.057318 3182 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:26.057783 kubelet[3182]: E1101 00:23:26.057371 3182 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8"} Nov 1 00:23:26.057783 kubelet[3182]: E1101 00:23:26.057416 3182 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:26.057783 kubelet[3182]: E1101 00:23:26.057450 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d66f695-3c82-4cb4-ac8a-5f7c10006e53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:33.117398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997749370.mount: Deactivated successfully. 
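Note how kubelet treats each of these failures as retriable: the pod worker logs "Error syncing pod, skipping" and requeues the pod, so the same sandbox ID reappears on the next sync (86354… fails at 00:23:25 and is retried at 00:23:26). A rough Go sketch of that requeue-with-backoff pattern, offered as an illustration of the behavior rather than kubelet's actual code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func main() {
        attempts := 0
        // Simulated StopPodSandbox that fails until the CNI becomes ready.
        stopSandbox := func() error {
            attempts++
            if attempts < 3 {
                return errors.New("failed to destroy network for sandbox")
            }
            return nil
        }
        delay := 500 * time.Millisecond
        for {
            err := stopSandbox()
            if err == nil {
                fmt.Println("sandbox stopped after", attempts, "attempts")
                return
            }
            // The failed sync is not fatal; the pod is requeued and retried.
            fmt.Println("Error syncing pod, skipping:", err)
            time.Sleep(delay)
            delay *= 2 // back off between retries
        }
    }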
Nov 1 00:23:33.190792 containerd[1985]: time="2025-11-01T00:23:33.189753859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:23:33.217560 containerd[1985]: time="2025-11-01T00:23:33.217456656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.250620211s" Nov 1 00:23:33.217560 containerd[1985]: time="2025-11-01T00:23:33.217524506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:23:33.231900 containerd[1985]: time="2025-11-01T00:23:33.231846425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:33.273556 containerd[1985]: time="2025-11-01T00:23:33.273514764Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:33.274237 containerd[1985]: time="2025-11-01T00:23:33.274196225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:33.301448 containerd[1985]: time="2025-11-01T00:23:33.301410878Z" level=info msg="CreateContainer within sandbox \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:23:33.414407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469803416.mount: Deactivated successfully. Nov 1 00:23:33.444465 containerd[1985]: time="2025-11-01T00:23:33.444409215Z" level=info msg="CreateContainer within sandbox \"ee73ce11847ad2c12bd30941f6e6fb6f762c10174a5611ce74eb34a3d492a382\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b\"" Nov 1 00:23:33.445159 containerd[1985]: time="2025-11-01T00:23:33.445131646Z" level=info msg="StartContainer for \"63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b\"" Nov 1 00:23:33.539030 systemd[1]: Started cri-containerd-63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b.scope - libcontainer container 63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b. Nov 1 00:23:33.625918 containerd[1985]: time="2025-11-01T00:23:33.625685841Z" level=info msg="StartContainer for \"63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b\" returns successfully" Nov 1 00:23:33.778413 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:23:33.780064 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
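The pull record above gives both payload size (156883537 bytes) and wall time (8.250620211s), so the effective transfer rate can be read straight off the log, roughly 18 MiB/s. A one-liner to reproduce the arithmetic:

    package main

    import "fmt"

    func main() {
        const sizeBytes = 156883537.0 // size "156883537" from the PullImage entry
        const seconds = 8.250620211   // "in 8.250620211s" from the same entry
        fmt.Printf("effective pull rate: %.1f MiB/s\n", sizeBytes/seconds/(1024*1024)) // ≈ 18.1 MiB/s
    }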
Nov 1 00:23:34.121829 kubelet[3182]: I1101 00:23:34.114008 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n8c22" podStartSLOduration=1.648635573 podStartE2EDuration="20.080016382s" podCreationTimestamp="2025-11-01 00:23:14 +0000 UTC" firstStartedPulling="2025-11-01 00:23:14.788736025 +0000 UTC m=+24.349903951" lastFinishedPulling="2025-11-01 00:23:33.220116848 +0000 UTC m=+42.781284760" observedRunningTime="2025-11-01 00:23:34.076927404 +0000 UTC m=+43.638095339" watchObservedRunningTime="2025-11-01 00:23:34.080016382 +0000 UTC m=+43.641184320" Nov 1 00:23:34.286974 containerd[1985]: time="2025-11-01T00:23:34.286938889Z" level=info msg="StopPodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\"" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.441 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.444 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" iface="eth0" netns="/var/run/netns/cni-bb49d11e-8c01-e94d-56a7-8d2d928eb2f8" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.447 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" iface="eth0" netns="/var/run/netns/cni-bb49d11e-8c01-e94d-56a7-8d2d928eb2f8" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.452 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" iface="eth0" netns="/var/run/netns/cni-bb49d11e-8c01-e94d-56a7-8d2d928eb2f8" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.453 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.453 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.839 [INFO][4431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.845 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.845 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.864 [WARNING][4431] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.864 [INFO][4431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.866 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:34.870648 containerd[1985]: 2025-11-01 00:23:34.868 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:34.871884 containerd[1985]: time="2025-11-01T00:23:34.871295333Z" level=info msg="TearDown network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" successfully" Nov 1 00:23:34.871884 containerd[1985]: time="2025-11-01T00:23:34.871330837Z" level=info msg="StopPodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" returns successfully" Nov 1 00:23:34.876401 systemd[1]: run-netns-cni\x2dbb49d11e\x2d8c01\x2de94d\x2d56a7\x2d8d2d928eb2f8.mount: Deactivated successfully. Nov 1 00:23:35.053950 kubelet[3182]: I1101 00:23:35.053866 3182 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0daeebf2-097b-4237-b016-04ef974e7589-whisker-ca-bundle\") pod \"0daeebf2-097b-4237-b016-04ef974e7589\" (UID: \"0daeebf2-097b-4237-b016-04ef974e7589\") " Nov 1 00:23:35.054684 kubelet[3182]: I1101 00:23:35.054402 3182 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r74t6\" (UniqueName: \"kubernetes.io/projected/0daeebf2-097b-4237-b016-04ef974e7589-kube-api-access-r74t6\") pod \"0daeebf2-097b-4237-b016-04ef974e7589\" (UID: \"0daeebf2-097b-4237-b016-04ef974e7589\") " Nov 1 00:23:35.054684 kubelet[3182]: I1101 00:23:35.054438 3182 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0daeebf2-097b-4237-b016-04ef974e7589-whisker-backend-key-pair\") pod \"0daeebf2-097b-4237-b016-04ef974e7589\" (UID: \"0daeebf2-097b-4237-b016-04ef974e7589\") " Nov 1 00:23:35.073773 systemd[1]: var-lib-kubelet-pods-0daeebf2\x2d097b\x2d4237\x2db016\x2d04ef974e7589-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:23:35.079045 systemd[1]: var-lib-kubelet-pods-0daeebf2\x2d097b\x2d4237\x2db016\x2d04ef974e7589-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr74t6.mount: Deactivated successfully. Nov 1 00:23:35.080464 kubelet[3182]: I1101 00:23:35.078771 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0daeebf2-097b-4237-b016-04ef974e7589-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0daeebf2-097b-4237-b016-04ef974e7589" (UID: "0daeebf2-097b-4237-b016-04ef974e7589"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:35.080657 kubelet[3182]: I1101 00:23:35.080643 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0daeebf2-097b-4237-b016-04ef974e7589-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0daeebf2-097b-4237-b016-04ef974e7589" (UID: "0daeebf2-097b-4237-b016-04ef974e7589"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:35.082661 kubelet[3182]: I1101 00:23:35.078661 3182 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0daeebf2-097b-4237-b016-04ef974e7589-kube-api-access-r74t6" (OuterVolumeSpecName: "kube-api-access-r74t6") pod "0daeebf2-097b-4237-b016-04ef974e7589" (UID: "0daeebf2-097b-4237-b016-04ef974e7589"). InnerVolumeSpecName "kube-api-access-r74t6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:35.154898 kubelet[3182]: I1101 00:23:35.154763 3182 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0daeebf2-097b-4237-b016-04ef974e7589-whisker-ca-bundle\") on node \"ip-172-31-30-202\" DevicePath \"\"" Nov 1 00:23:35.154898 kubelet[3182]: I1101 00:23:35.154815 3182 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r74t6\" (UniqueName: \"kubernetes.io/projected/0daeebf2-097b-4237-b016-04ef974e7589-kube-api-access-r74t6\") on node \"ip-172-31-30-202\" DevicePath \"\"" Nov 1 00:23:35.154898 kubelet[3182]: I1101 00:23:35.154829 3182 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0daeebf2-097b-4237-b016-04ef974e7589-whisker-backend-key-pair\") on node \"ip-172-31-30-202\" DevicePath \"\"" Nov 1 00:23:35.390905 systemd[1]: Removed slice kubepods-besteffort-pod0daeebf2_097b_4237_b016_04ef974e7589.slice - libcontainer container kubepods-besteffort-pod0daeebf2_097b_4237_b016_04ef974e7589.slice. Nov 1 00:23:35.571517 systemd[1]: Created slice kubepods-besteffort-pod7f37928f_30fa_48de_9724_092e451da4bf.slice - libcontainer container kubepods-besteffort-pod7f37928f_30fa_48de_9724_092e451da4bf.slice. 
Nov 1 00:23:35.665893 kubelet[3182]: I1101 00:23:35.665743 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f37928f-30fa-48de-9724-092e451da4bf-whisker-backend-key-pair\") pod \"whisker-6659dc5f84-8hw6r\" (UID: \"7f37928f-30fa-48de-9724-092e451da4bf\") " pod="calico-system/whisker-6659dc5f84-8hw6r" Nov 1 00:23:35.665893 kubelet[3182]: I1101 00:23:35.665801 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f37928f-30fa-48de-9724-092e451da4bf-whisker-ca-bundle\") pod \"whisker-6659dc5f84-8hw6r\" (UID: \"7f37928f-30fa-48de-9724-092e451da4bf\") " pod="calico-system/whisker-6659dc5f84-8hw6r" Nov 1 00:23:35.665893 kubelet[3182]: I1101 00:23:35.665829 3182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gclm\" (UniqueName: \"kubernetes.io/projected/7f37928f-30fa-48de-9724-092e451da4bf-kube-api-access-2gclm\") pod \"whisker-6659dc5f84-8hw6r\" (UID: \"7f37928f-30fa-48de-9724-092e451da4bf\") " pod="calico-system/whisker-6659dc5f84-8hw6r" Nov 1 00:23:35.898583 containerd[1985]: time="2025-11-01T00:23:35.898458593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6659dc5f84-8hw6r,Uid:7f37928f-30fa-48de-9724-092e451da4bf,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:36.170443 (udev-worker)[4378]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:23:36.175519 systemd-networkd[1895]: cali5c7bd1ccd2d: Link UP Nov 1 00:23:36.175881 systemd-networkd[1895]: cali5c7bd1ccd2d: Gained carrier Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:35.990 [INFO][4560] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.009 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0 whisker-6659dc5f84- calico-system 7f37928f-30fa-48de-9724-092e451da4bf 935 0 2025-11-01 00:23:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6659dc5f84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-202 whisker-6659dc5f84-8hw6r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5c7bd1ccd2d [] [] }} ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.009 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.071 [INFO][4572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" HandleID="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Workload="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.197977 containerd[1985]: 
2025-11-01 00:23:36.072 [INFO][4572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" HandleID="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Workload="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-202", "pod":"whisker-6659dc5f84-8hw6r", "timestamp":"2025-11-01 00:23:36.071349532 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.072 [INFO][4572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.072 [INFO][4572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.072 [INFO][4572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.094 [INFO][4572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.113 [INFO][4572] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.122 [INFO][4572] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.125 [INFO][4572] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.133 [INFO][4572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.133 [INFO][4572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.135 [INFO][4572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.141 [INFO][4572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.152 [INFO][4572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.1/26] block=192.168.25.0/26 handle="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.152 [INFO][4572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.1/26] handle="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" host="ip-172-31-30-202" Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.152 [INFO][4572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:36.197977 containerd[1985]: 2025-11-01 00:23:36.152 [INFO][4572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.1/26] IPv6=[] ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" HandleID="k8s-pod-network.7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Workload="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.198837 containerd[1985]: 2025-11-01 00:23:36.159 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0", GenerateName:"whisker-6659dc5f84-", Namespace:"calico-system", SelfLink:"", UID:"7f37928f-30fa-48de-9724-092e451da4bf", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6659dc5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"whisker-6659dc5f84-8hw6r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c7bd1ccd2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.198837 containerd[1985]: 2025-11-01 00:23:36.159 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.1/32] ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.198837 containerd[1985]: 2025-11-01 00:23:36.159 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c7bd1ccd2d ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.198837 containerd[1985]: 2025-11-01 00:23:36.171 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.198837 containerd[1985]: 2025-11-01 00:23:36.173 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" 
WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0", GenerateName:"whisker-6659dc5f84-", Namespace:"calico-system", SelfLink:"", UID:"7f37928f-30fa-48de-9724-092e451da4bf", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6659dc5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c", Pod:"whisker-6659dc5f84-8hw6r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5c7bd1ccd2d", MAC:"86:12:1a:6e:9a:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.198837 containerd[1985]: 2025-11-01 00:23:36.192 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c" Namespace="calico-system" Pod="whisker-6659dc5f84-8hw6r" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6659dc5f84--8hw6r-eth0" Nov 1 00:23:36.252773 containerd[1985]: time="2025-11-01T00:23:36.252606613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:36.253161 containerd[1985]: time="2025-11-01T00:23:36.253071913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:36.253161 containerd[1985]: time="2025-11-01T00:23:36.253137491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:36.253522 containerd[1985]: time="2025-11-01T00:23:36.253397996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:36.285968 systemd[1]: Started cri-containerd-7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c.scope - libcontainer container 7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c. 
Nov 1 00:23:36.355826 containerd[1985]: time="2025-11-01T00:23:36.355712316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6659dc5f84-8hw6r,Uid:7f37928f-30fa-48de-9724-092e451da4bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"7432fc8803d41be820d2ab4cc8fa49d3c4a38a0787b67f3e02352f8d3793cb2c\"" Nov 1 00:23:36.360974 containerd[1985]: time="2025-11-01T00:23:36.359547656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:36.629204 containerd[1985]: time="2025-11-01T00:23:36.629040755Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:36.648159 containerd[1985]: time="2025-11-01T00:23:36.631418610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:36.648321 containerd[1985]: time="2025-11-01T00:23:36.633639366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:36.648550 kubelet[3182]: E1101 00:23:36.648495 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:36.654362 kubelet[3182]: E1101 00:23:36.654298 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:36.672883 kubelet[3182]: E1101 00:23:36.667423 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.673831 containerd[1985]: time="2025-11-01T00:23:36.673800862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:36.686557 kubelet[3182]: I1101 00:23:36.686089 3182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0daeebf2-097b-4237-b016-04ef974e7589" path="/var/lib/kubelet/pods/0daeebf2-097b-4237-b016-04ef974e7589/volumes" Nov 1 00:23:36.687529 containerd[1985]: time="2025-11-01T00:23:36.687473966Z" level=info msg="StopPodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\"" Nov 1 00:23:36.687986 containerd[1985]: time="2025-11-01T00:23:36.687776043Z" level=info msg="StopPodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\"" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.778 [INFO][4647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.778 
[INFO][4647] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" iface="eth0" netns="/var/run/netns/cni-164a37dc-2b42-3fc2-b541-6b4d0152c964" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.779 [INFO][4647] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" iface="eth0" netns="/var/run/netns/cni-164a37dc-2b42-3fc2-b541-6b4d0152c964" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.780 [INFO][4647] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" iface="eth0" netns="/var/run/netns/cni-164a37dc-2b42-3fc2-b541-6b4d0152c964" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.780 [INFO][4647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.780 [INFO][4647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.823 [INFO][4660] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.823 [INFO][4660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.823 [INFO][4660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.831 [WARNING][4660] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.831 [INFO][4660] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.833 [INFO][4660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.836848 containerd[1985]: 2025-11-01 00:23:36.834 [INFO][4647] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:36.839966 containerd[1985]: time="2025-11-01T00:23:36.837650307Z" level=info msg="TearDown network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" successfully" Nov 1 00:23:36.839966 containerd[1985]: time="2025-11-01T00:23:36.837676781Z" level=info msg="StopPodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" returns successfully" Nov 1 00:23:36.841925 systemd[1]: run-netns-cni\x2d164a37dc\x2d2b42\x2d3fc2\x2db541\x2d6b4d0152c964.mount: Deactivated successfully. Nov 1 00:23:36.846943 containerd[1985]: time="2025-11-01T00:23:36.846613216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cdpgq,Uid:ccf197b2-b2cc-466e-947f-e45189c998df,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.784 [INFO][4648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.785 [INFO][4648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" iface="eth0" netns="/var/run/netns/cni-087f7c2e-abbd-450f-b5f5-cbc2a6d98b2a" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.785 [INFO][4648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" iface="eth0" netns="/var/run/netns/cni-087f7c2e-abbd-450f-b5f5-cbc2a6d98b2a" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.785 [INFO][4648] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" iface="eth0" netns="/var/run/netns/cni-087f7c2e-abbd-450f-b5f5-cbc2a6d98b2a" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.785 [INFO][4648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.786 [INFO][4648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.829 [INFO][4662] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.829 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.833 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.843 [WARNING][4662] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.844 [INFO][4662] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.850 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.857435 containerd[1985]: 2025-11-01 00:23:36.854 [INFO][4648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:36.861211 containerd[1985]: time="2025-11-01T00:23:36.858340534Z" level=info msg="TearDown network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" successfully" Nov 1 00:23:36.861211 containerd[1985]: time="2025-11-01T00:23:36.858381060Z" level=info msg="StopPodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" returns successfully" Nov 1 00:23:36.861161 systemd[1]: run-netns-cni\x2d087f7c2e\x2dabbd\x2d450f\x2db5f5\x2dcbc2a6d98b2a.mount: Deactivated successfully. Nov 1 00:23:36.865292 containerd[1985]: time="2025-11-01T00:23:36.864932826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c56f6579-hjmzt,Uid:3b1a064e-eaea-4078-a670-51fea2063bf7,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:36.942102 containerd[1985]: time="2025-11-01T00:23:36.941861675Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:36.944905 containerd[1985]: time="2025-11-01T00:23:36.944478385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:36.944905 containerd[1985]: time="2025-11-01T00:23:36.944604231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:36.945533 kubelet[3182]: E1101 00:23:36.945378 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:36.945533 kubelet[3182]: E1101 00:23:36.945429 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:36.946078 kubelet[3182]: E1101 00:23:36.945653 3182 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container whisker-backend start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.946230 kubelet[3182]: E1101 00:23:36.945871 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:23:37.107753 kubelet[3182]: E1101 00:23:37.107556 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:23:37.166046 systemd-networkd[1895]: cali253026eee07: Link UP Nov 1 00:23:37.166252 systemd-networkd[1895]: cali253026eee07: Gained carrier Nov 1 00:23:37.173614 (udev-worker)[4380]: Network interface NamePolicy= disabled on kernel command line. 
Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:36.967 [INFO][4675] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:36.991 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0 calico-kube-controllers-85c56f6579- calico-system 3b1a064e-eaea-4078-a670-51fea2063bf7 948 0 2025-11-01 00:23:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85c56f6579 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-202 calico-kube-controllers-85c56f6579-hjmzt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali253026eee07 [] [] }} ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:36.991 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.063 [INFO][4699] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" HandleID="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.064 [INFO][4699] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" HandleID="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad370), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-202", "pod":"calico-kube-controllers-85c56f6579-hjmzt", "timestamp":"2025-11-01 00:23:37.063877463 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.064 [INFO][4699] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.064 [INFO][4699] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.064 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.079 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.085 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.092 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.098 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.108 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.108 [INFO][4699] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.112 [INFO][4699] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020 Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.126 [INFO][4699] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.152 [INFO][4699] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.2/26] block=192.168.25.0/26 handle="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.153 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.2/26] handle="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" host="ip-172-31-30-202" Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.153 [INFO][4699] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:37.215586 containerd[1985]: 2025-11-01 00:23:37.153 [INFO][4699] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.2/26] IPv6=[] ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" HandleID="k8s-pod-network.11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.216659 containerd[1985]: 2025-11-01 00:23:37.160 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0", GenerateName:"calico-kube-controllers-85c56f6579-", Namespace:"calico-system", SelfLink:"", UID:"3b1a064e-eaea-4078-a670-51fea2063bf7", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c56f6579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"calico-kube-controllers-85c56f6579-hjmzt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253026eee07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.216659 containerd[1985]: 2025-11-01 00:23:37.161 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.2/32] ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.216659 containerd[1985]: 2025-11-01 00:23:37.161 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali253026eee07 ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.216659 containerd[1985]: 2025-11-01 00:23:37.164 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.216659 containerd[1985]: 2025-11-01 
00:23:37.167 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0", GenerateName:"calico-kube-controllers-85c56f6579-", Namespace:"calico-system", SelfLink:"", UID:"3b1a064e-eaea-4078-a670-51fea2063bf7", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c56f6579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020", Pod:"calico-kube-controllers-85c56f6579-hjmzt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253026eee07", MAC:"aa:be:b2:60:7b:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.216659 containerd[1985]: 2025-11-01 00:23:37.212 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020" Namespace="calico-system" Pod="calico-kube-controllers-85c56f6579-hjmzt" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:37.264784 containerd[1985]: time="2025-11-01T00:23:37.260128405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:37.264784 containerd[1985]: time="2025-11-01T00:23:37.260241095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:37.264784 containerd[1985]: time="2025-11-01T00:23:37.260267308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:37.264784 containerd[1985]: time="2025-11-01T00:23:37.260405323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:37.325333 systemd[1]: Started cri-containerd-11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020.scope - libcontainer container 11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020. 
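The ipam/ipam.go lines above trace a fixed sequence: acquire the host-wide IPAM lock, confirm the host's affinity for block 192.168.25.0/26, claim the next free address, and write the block back (192.168.25.2 here). A simplified model of that claim step, assuming a flat 64-slot ownership array rather than Calico's real block structure:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models a /26 allocation block like 192.168.25.0/26 in the log:
// 64 ordinals, each free or bound to a handle such as
// "k8s-pod-network.<containerID>". Illustrative only; not Calico's
// actual ipam.go types.
type block struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock"
	cidr   net.IPNet
	owners [64]string // "" means the ordinal is free
}

// claim assigns the lowest free ordinal to handle, mirroring
// "Attempting to assign 1 addresses from block".
func (b *block) claim(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ord, owner := range b.owners {
		if owner != "" {
			continue
		}
		b.owners[ord] = handle
		ip := make(net.IP, len(b.cidr.IP))
		copy(ip, b.cidr.IP)
		ip[len(ip)-1] += byte(ord) // stays in range within a /26
		return ip, nil
	}
	return nil, fmt.Errorf("block %s is full", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.25.0/26")
	b := &block{cidr: *cidr}
	// Ordinals 0 and 1 assumed already taken by earlier workloads,
	// so the next claim lands on .2 as in the log.
	b.owners[0], b.owners[1] = "reserved", "reserved"
	ip, _ := b.claim("k8s-pod-network.11bde11a0...")
	fmt.Println("claimed", ip) // claimed 192.168.25.2
}
```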
Nov 1 00:23:37.360361 systemd-networkd[1895]: cali168c71da916: Link UP Nov 1 00:23:37.361578 systemd-networkd[1895]: cali168c71da916: Gained carrier Nov 1 00:23:37.363985 systemd-networkd[1895]: cali5c7bd1ccd2d: Gained IPv6LL Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:36.987 [INFO][4674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.006 [INFO][4674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0 coredns-66bc5c9577- kube-system ccf197b2-b2cc-466e-947f-e45189c998df 947 0 2025-11-01 00:22:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-202 coredns-66bc5c9577-cdpgq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali168c71da916 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.006 [INFO][4674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.077 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" HandleID="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.077 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" HandleID="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-202", "pod":"coredns-66bc5c9577-cdpgq", "timestamp":"2025-11-01 00:23:37.076993776 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.078 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.154 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.154 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.206 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.232 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.246 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.251 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.284 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.284 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.289 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1 Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.314 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.343 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.3/26] block=192.168.25.0/26 handle="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.343 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.3/26] handle="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" host="ip-172-31-30-202" Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.344 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:37.406680 containerd[1985]: 2025-11-01 00:23:37.344 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.3/26] IPv6=[] ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" HandleID="k8s-pod-network.94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:37.408475 containerd[1985]: 2025-11-01 00:23:37.350 [INFO][4674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ccf197b2-b2cc-466e-947f-e45189c998df", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"coredns-66bc5c9577-cdpgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali168c71da916", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.408475 containerd[1985]: 2025-11-01 00:23:37.350 [INFO][4674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.3/32] ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:37.408475 containerd[1985]: 2025-11-01 00:23:37.350 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali168c71da916 ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 
00:23:37.408475 containerd[1985]: 2025-11-01 00:23:37.362 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:37.408475 containerd[1985]: 2025-11-01 00:23:37.365 [INFO][4674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ccf197b2-b2cc-466e-947f-e45189c998df", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1", Pod:"coredns-66bc5c9577-cdpgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali168c71da916", MAC:"ba:e4:8c:fb:53:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.408475 containerd[1985]: 2025-11-01 00:23:37.400 [INFO][4674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1" Namespace="kube-system" Pod="coredns-66bc5c9577-cdpgq" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:37.447042 containerd[1985]: time="2025-11-01T00:23:37.446677190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:37.447672 containerd[1985]: time="2025-11-01T00:23:37.447626636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:37.448044 containerd[1985]: time="2025-11-01T00:23:37.448001878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:37.448284 containerd[1985]: time="2025-11-01T00:23:37.448249794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:37.496083 systemd[1]: Started cri-containerd-94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1.scope - libcontainer container 94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1. Nov 1 00:23:37.595149 containerd[1985]: time="2025-11-01T00:23:37.594344336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cdpgq,Uid:ccf197b2-b2cc-466e-947f-e45189c998df,Namespace:kube-system,Attempt:1,} returns sandbox id \"94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1\"" Nov 1 00:23:37.605616 containerd[1985]: time="2025-11-01T00:23:37.605574006Z" level=info msg="CreateContainer within sandbox \"94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:37.647956 containerd[1985]: time="2025-11-01T00:23:37.647811768Z" level=info msg="CreateContainer within sandbox \"94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f2294a5b260d8be9daf24539d26ca77e820c0300f545a9ed5954b998e0bea7f\"" Nov 1 00:23:37.652849 containerd[1985]: time="2025-11-01T00:23:37.652807751Z" level=info msg="StartContainer for \"7f2294a5b260d8be9daf24539d26ca77e820c0300f545a9ed5954b998e0bea7f\"" Nov 1 00:23:37.674479 containerd[1985]: time="2025-11-01T00:23:37.673919939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c56f6579-hjmzt,Uid:3b1a064e-eaea-4078-a670-51fea2063bf7,Namespace:calico-system,Attempt:1,} returns sandbox id \"11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020\"" Nov 1 00:23:37.678517 containerd[1985]: time="2025-11-01T00:23:37.678478744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:37.685380 containerd[1985]: time="2025-11-01T00:23:37.685137605Z" level=info msg="StopPodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\"" Nov 1 00:23:37.695411 containerd[1985]: time="2025-11-01T00:23:37.693409647Z" level=info msg="StopPodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\"" Nov 1 00:23:37.704832 containerd[1985]: time="2025-11-01T00:23:37.703489362Z" level=info msg="StopPodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\"" Nov 1 00:23:37.772577 systemd[1]: Started cri-containerd-7f2294a5b260d8be9daf24539d26ca77e820c0300f545a9ed5954b998e0bea7f.scope - libcontainer container 7f2294a5b260d8be9daf24539d26ca77e820c0300f545a9ed5954b998e0bea7f. 
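The v3.WorkloadEndpoint dumps above print ports as Go hex literals; decoded, they are the usual CoreDNS set: 0x35 = 53 (dns, dns-tcp), 0x23c1 = 9153 (metrics), 0x1f90 = 8080 (liveness-probe), 0x1ff5 = 8181 (readiness-probe). A quick check:

```go
package main

import "fmt"

func main() {
	// Hex port values exactly as printed in the WorkloadEndpointPort dump.
	for name, p := range map[string]uint16{
		"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
		"liveness-probe": 0x1f90, "readiness-probe": 0x1ff5,
	} {
		fmt.Printf("%s -> %d\n", name, p)
	}
}
```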
Nov 1 00:23:37.912377 containerd[1985]: time="2025-11-01T00:23:37.912322136Z" level=info msg="StartContainer for \"7f2294a5b260d8be9daf24539d26ca77e820c0300f545a9ed5954b998e0bea7f\" returns successfully" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.901 [INFO][4865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.902 [INFO][4865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" iface="eth0" netns="/var/run/netns/cni-4b9f83cd-b3c2-8e81-7142-eeeb1e0d4529" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.902 [INFO][4865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" iface="eth0" netns="/var/run/netns/cni-4b9f83cd-b3c2-8e81-7142-eeeb1e0d4529" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.903 [INFO][4865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" iface="eth0" netns="/var/run/netns/cni-4b9f83cd-b3c2-8e81-7142-eeeb1e0d4529" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.903 [INFO][4865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.903 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.998 [INFO][4901] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.998 [INFO][4901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:37.998 [INFO][4901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:38.018 [WARNING][4901] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:38.018 [INFO][4901] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:38.025 [INFO][4901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:38.038673 containerd[1985]: 2025-11-01 00:23:38.032 [INFO][4865] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:38.040324 containerd[1985]: time="2025-11-01T00:23:38.039358604Z" level=info msg="TearDown network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" successfully" Nov 1 00:23:38.040324 containerd[1985]: time="2025-11-01T00:23:38.039401324Z" level=info msg="StopPodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" returns successfully" Nov 1 00:23:38.047568 containerd[1985]: time="2025-11-01T00:23:38.047515212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-kcfrc,Uid:29fc9071-7019-4315-907a-15289e1e3c38,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:38.128899 kubelet[3182]: E1101 00:23:38.128814 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.012 [INFO][4877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.013 [INFO][4877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" iface="eth0" netns="/var/run/netns/cni-8fba24ca-58e4-a538-9d53-4a6b643cc93c" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.013 [INFO][4877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" iface="eth0" netns="/var/run/netns/cni-8fba24ca-58e4-a538-9d53-4a6b643cc93c" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.014 [INFO][4877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" iface="eth0" netns="/var/run/netns/cni-8fba24ca-58e4-a538-9d53-4a6b643cc93c" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.014 [INFO][4877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.014 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.135 [INFO][4912] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.141 [INFO][4912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.141 [INFO][4912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.167 [WARNING][4912] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.167 [INFO][4912] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.174 [INFO][4912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:38.187587 containerd[1985]: 2025-11-01 00:23:38.184 [INFO][4877] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:38.189313 containerd[1985]: time="2025-11-01T00:23:38.188553344Z" level=info msg="TearDown network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" successfully" Nov 1 00:23:38.189313 containerd[1985]: time="2025-11-01T00:23:38.188590641Z" level=info msg="StopPodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" returns successfully" Nov 1 00:23:38.195274 containerd[1985]: time="2025-11-01T00:23:38.194951469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2crj,Uid:8eba1079-36a0-4f1b-a35a-7ac8d14e183b,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:38.210351 containerd[1985]: time="2025-11-01T00:23:38.210193888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:38.212594 containerd[1985]: time="2025-11-01T00:23:38.212247050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:38.212594 containerd[1985]: time="2025-11-01T00:23:38.212539639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:38.213326 kubelet[3182]: E1101 00:23:38.213119 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:38.213326 kubelet[3182]: E1101 00:23:38.213171 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:38.213326 kubelet[3182]: E1101 00:23:38.213253 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85c56f6579-hjmzt_calico-system(3b1a064e-eaea-4078-a670-51fea2063bf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:38.213894 kubelet[3182]: E1101 00:23:38.213627 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:23:38.223254 
kubelet[3182]: I1101 00:23:38.222612 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cdpgq" podStartSLOduration=42.222590777 podStartE2EDuration="42.222590777s" podCreationTimestamp="2025-11-01 00:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:38.156530884 +0000 UTC m=+47.717698819" watchObservedRunningTime="2025-11-01 00:23:38.222590777 +0000 UTC m=+47.783758718" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.037 [INFO][4882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.039 [INFO][4882] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" iface="eth0" netns="/var/run/netns/cni-f9d84012-3d6e-8b31-4687-c25fe214f399" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.040 [INFO][4882] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" iface="eth0" netns="/var/run/netns/cni-f9d84012-3d6e-8b31-4687-c25fe214f399" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.041 [INFO][4882] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" iface="eth0" netns="/var/run/netns/cni-f9d84012-3d6e-8b31-4687-c25fe214f399" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.041 [INFO][4882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.042 [INFO][4882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.150 [INFO][4917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.150 [INFO][4917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.174 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.210 [WARNING][4917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.210 [INFO][4917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.224 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:38.242593 containerd[1985]: 2025-11-01 00:23:38.229 [INFO][4882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:38.243407 containerd[1985]: time="2025-11-01T00:23:38.243339690Z" level=info msg="TearDown network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" successfully" Nov 1 00:23:38.243407 containerd[1985]: time="2025-11-01T00:23:38.243375657Z" level=info msg="StopPodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" returns successfully" Nov 1 00:23:38.249051 containerd[1985]: time="2025-11-01T00:23:38.248948725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cfdt,Uid:9d66f695-3c82-4cb4-ac8a-5f7c10006e53,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:38.279709 systemd[1]: run-netns-cni\x2df9d84012\x2d3d6e\x2d8b31\x2d4687\x2dc25fe214f399.mount: Deactivated successfully. Nov 1 00:23:38.280700 systemd[1]: run-netns-cni\x2d4b9f83cd\x2db3c2\x2d8e81\x2d7142\x2deeeb1e0d4529.mount: Deactivated successfully. Nov 1 00:23:38.280809 systemd[1]: run-netns-cni\x2d8fba24ca\x2d58e4\x2da538\x2d9d53\x2d4a6b643cc93c.mount: Deactivated successfully. 
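The run-netns mount units above show systemd's unit-name escaping: path separators become '-', and literal '-' bytes in the netns name are emitted as \x2d, so /run/netns/cni-087f7c2e-… is tracked as run-netns-cni\x2d087f7c2e\x2d….mount. A small decoder for the \xHH convention (illustrative only; `systemd-escape --unescape` performs the inverse for real unit names):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's \xHH escaping in a unit name, turning
// each \x2d back into a literal '-'. Mapping the remaining plain '-'
// separators back to '/' requires knowing the original mount path.
func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d087f7c2e\x2dabbd\x2d450f\x2db5f5\x2dcbc2a6d98b2a.mount`
	fmt.Println(unescapeUnit(unit))
	// run-netns-cni-087f7c2e-abbd-450f-b5f5-cbc2a6d98b2a.mount
}
```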
Nov 1 00:23:38.518575 systemd-networkd[1895]: cali253026eee07: Gained IPv6LL Nov 1 00:23:38.528478 systemd-networkd[1895]: cali1f8fdf315f0: Link UP Nov 1 00:23:38.531669 systemd-networkd[1895]: cali1f8fdf315f0: Gained carrier Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.198 [INFO][4924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.252 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0 calico-apiserver-67d9f69bfb- calico-apiserver 29fc9071-7019-4315-907a-15289e1e3c38 972 0 2025-11-01 00:23:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67d9f69bfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-202 calico-apiserver-67d9f69bfb-kcfrc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f8fdf315f0 [] [] }} ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.252 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.429 [INFO][4951] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" HandleID="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.431 [INFO][4951] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" HandleID="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-202", "pod":"calico-apiserver-67d9f69bfb-kcfrc", "timestamp":"2025-11-01 00:23:38.429258377 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.431 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.431 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.432 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.445 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.453 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.464 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.472 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.479 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.479 [INFO][4951] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.481 [INFO][4951] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.490 [INFO][4951] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.507 [INFO][4951] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.4/26] block=192.168.25.0/26 handle="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.507 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.4/26] handle="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" host="ip-172-31-30-202" Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.507 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:38.576908 containerd[1985]: 2025-11-01 00:23:38.507 [INFO][4951] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.4/26] IPv6=[] ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" HandleID="k8s-pod-network.2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.579998 containerd[1985]: 2025-11-01 00:23:38.519 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"29fc9071-7019-4315-907a-15289e1e3c38", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"calico-apiserver-67d9f69bfb-kcfrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f8fdf315f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:38.579998 containerd[1985]: 2025-11-01 00:23:38.519 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.4/32] ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.579998 containerd[1985]: 2025-11-01 00:23:38.519 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f8fdf315f0 ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.579998 containerd[1985]: 2025-11-01 00:23:38.534 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.579998 containerd[1985]: 2025-11-01 00:23:38.536 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"29fc9071-7019-4315-907a-15289e1e3c38", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d", Pod:"calico-apiserver-67d9f69bfb-kcfrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f8fdf315f0", MAC:"82:9a:13:62:fe:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:38.579998 containerd[1985]: 2025-11-01 00:23:38.570 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-kcfrc" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:38.686329 containerd[1985]: time="2025-11-01T00:23:38.685867923Z" level=info msg="StopPodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\"" Nov 1 00:23:38.697049 systemd-networkd[1895]: calif0550e81e72: Link UP Nov 1 00:23:38.708114 systemd-networkd[1895]: calif0550e81e72: Gained carrier Nov 1 00:23:38.737815 containerd[1985]: time="2025-11-01T00:23:38.736060580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:38.737815 containerd[1985]: time="2025-11-01T00:23:38.736145453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:38.737815 containerd[1985]: time="2025-11-01T00:23:38.736169444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:38.737815 containerd[1985]: time="2025-11-01T00:23:38.736282120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:38.771958 systemd-networkd[1895]: cali168c71da916: Gained IPv6LL Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.308 [INFO][4941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.357 [INFO][4941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0 coredns-66bc5c9577- kube-system 8eba1079-36a0-4f1b-a35a-7ac8d14e183b 975 0 2025-11-01 00:22:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-202 coredns-66bc5c9577-f2crj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0550e81e72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.358 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.513 [INFO][4969] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" HandleID="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.513 [INFO][4969] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" HandleID="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003784c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-202", "pod":"coredns-66bc5c9577-f2crj", "timestamp":"2025-11-01 00:23:38.513094842 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.513 [INFO][4969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.513 [INFO][4969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.513 [INFO][4969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.550 [INFO][4969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.585 [INFO][4969] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.598 [INFO][4969] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.602 [INFO][4969] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.610 [INFO][4969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.611 [INFO][4969] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.622 [INFO][4969] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13 Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.634 [INFO][4969] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.665 [INFO][4969] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.5/26] block=192.168.25.0/26 handle="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.667 [INFO][4969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.5/26] handle="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" host="ip-172-31-30-202" Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.667 [INFO][4969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
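
Both assignments so far (.4 for the apiserver pod, .5 for coredns) bracket their work with "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": concurrent CNI ADD calls on one node are serialized so two pods cannot claim the same slot. A toy illustration of that discipline, with a process-local mutex standing in for Calico's host-wide lock:

package main

import (
	"fmt"
	"sync"
)

// assigner hands out consecutive offsets within one block. The mutex plays
// the role of the host-wide IPAM lock in the records above: without it, two
// CNI ADD calls racing on the same node could both observe the same free
// slot and claim duplicate addresses.
type assigner struct {
	mu   sync.Mutex
	next int
}

func (a *assigner) assign(pod string) {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("assigned 192.168.25.%d/26 to %s\n", a.next, pod)
	a.next++
}

func main() {
	a := &assigner{next: 4} // .0-.3 already in use on this node
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-67d9f69bfb-kcfrc", "coredns-66bc5c9577-f2crj"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			a.assign(p)
		}(pod)
	}
	wg.Wait() // which pod gets .4 depends on scheduling; the log shows kcfrc won
}
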
Nov 1 00:23:38.787331 containerd[1985]: 2025-11-01 00:23:38.667 [INFO][4969] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.5/26] IPv6=[] ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" HandleID="k8s-pod-network.916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.790698 containerd[1985]: 2025-11-01 00:23:38.675 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8eba1079-36a0-4f1b-a35a-7ac8d14e183b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"coredns-66bc5c9577-f2crj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0550e81e72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:38.790698 containerd[1985]: 2025-11-01 00:23:38.675 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.5/32] ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.790698 containerd[1985]: 2025-11-01 00:23:38.676 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0550e81e72 ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 
00:23:38.790698 containerd[1985]: 2025-11-01 00:23:38.709 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.790698 containerd[1985]: 2025-11-01 00:23:38.711 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8eba1079-36a0-4f1b-a35a-7ac8d14e183b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13", Pod:"coredns-66bc5c9577-f2crj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0550e81e72", MAC:"0e:1d:86:c0:07:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:38.790698 containerd[1985]: 2025-11-01 00:23:38.747 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13" Namespace="kube-system" Pod="coredns-66bc5c9577-f2crj" WorkloadEndpoint="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:38.807994 systemd[1]: Started cri-containerd-2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d.scope - libcontainer container 2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d. 
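
The endpoint dumps above are Go struct literals of Calico's v3.WorkloadEndpoint. Two details worth noticing: the IPAM records report the address with its block prefix (192.168.25.5/26), while the endpoint itself stores the single workload address as a /32; and the MAC and ContainerID fields are empty when the endpoint is first populated, then filled in before the "Wrote updated endpoint to datastore" record. A trimmed local stand-in for the fields the log keeps returning to (the real type lives in the projectcalico.org/v3 API and carries far more):

package main

import "fmt"

// endpoint mirrors a handful of v3.WorkloadEndpoint fields, populated with
// the coredns values from the records above.
type endpoint struct {
	Node          string
	Pod           string
	ContainerID   string
	InterfaceName string   // host-side veth, e.g. calif0550e81e72
	MAC           string   // empty until the dataplane wires the veth
	IPNetworks    []string // always /32 for a single workload address
	Profiles      []string
}

func main() {
	ep := endpoint{
		Node:          "ip-172-31-30-202",
		Pod:           "coredns-66bc5c9577-f2crj",
		ContainerID:   "916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13",
		InterfaceName: "calif0550e81e72",
		MAC:           "0e:1d:86:c0:07:51",
		IPNetworks:    []string{"192.168.25.5/32"},
		Profiles:      []string{"kns.kube-system", "ksa.kube-system.coredns"},
	}
	fmt.Printf("%+v\n", ep)
}
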
Nov 1 00:23:38.901393 systemd-networkd[1895]: cali71ed63bdbdd: Link UP Nov 1 00:23:38.921934 systemd-networkd[1895]: cali71ed63bdbdd: Gained carrier Nov 1 00:23:38.924530 containerd[1985]: time="2025-11-01T00:23:38.908585362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:38.924530 containerd[1985]: time="2025-11-01T00:23:38.912099658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:38.924530 containerd[1985]: time="2025-11-01T00:23:38.912132933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:38.924530 containerd[1985]: time="2025-11-01T00:23:38.912249142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.466 [INFO][4957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.520 [INFO][4957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0 csi-node-driver- calico-system 9d66f695-3c82-4cb4-ac8a-5f7c10006e53 976 0 2025-11-01 00:23:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-202 csi-node-driver-5cfdt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali71ed63bdbdd [] [] }} ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.520 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.619 [INFO][4982] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" HandleID="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.620 [INFO][4982] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" HandleID="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-202", "pod":"csi-node-driver-5cfdt", "timestamp":"2025-11-01 00:23:38.619004981 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.620 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.667 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.667 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.682 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.785 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.816 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.821 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.827 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.827 [INFO][4982] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.831 [INFO][4982] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6 Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.862 [INFO][4982] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.879 [INFO][4982] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.6/26] block=192.168.25.0/26 handle="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.879 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.6/26] handle="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" host="ip-172-31-30-202" Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.880 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:38.966792 containerd[1985]: 2025-11-01 00:23:38.881 [INFO][4982] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.6/26] IPv6=[] ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" HandleID="k8s-pod-network.bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.970109 containerd[1985]: 2025-11-01 00:23:38.889 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d66f695-3c82-4cb4-ac8a-5f7c10006e53", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"csi-node-driver-5cfdt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71ed63bdbdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:38.970109 containerd[1985]: 2025-11-01 00:23:38.890 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.6/32] ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.970109 containerd[1985]: 2025-11-01 00:23:38.890 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71ed63bdbdd ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.970109 containerd[1985]: 2025-11-01 00:23:38.913 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:38.970109 containerd[1985]: 2025-11-01 00:23:38.916 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" 
Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d66f695-3c82-4cb4-ac8a-5f7c10006e53", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6", Pod:"csi-node-driver-5cfdt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71ed63bdbdd", MAC:"e2:a7:f6:ca:30:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:38.970109 containerd[1985]: 2025-11-01 00:23:38.955 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6" Namespace="calico-system" Pod="csi-node-driver-5cfdt" WorkloadEndpoint="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:39.007364 systemd[1]: Started cri-containerd-916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13.scope - libcontainer container 916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13. Nov 1 00:23:39.087832 containerd[1985]: time="2025-11-01T00:23:39.087594258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:39.089782 containerd[1985]: time="2025-11-01T00:23:39.089456823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:39.089782 containerd[1985]: time="2025-11-01T00:23:39.089495678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:39.090523 containerd[1985]: time="2025-11-01T00:23:39.090454410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:39.130977 systemd[1]: Started cri-containerd-bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6.scope - libcontainer container bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6. 
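
Every host-side veth in this log (cali1f8fdf315f0, calif0550e81e72, cali71ed63bdbdd) is exactly 15 characters: the "cali" prefix plus 11 hex digits. That length is forced by the kernel's IFNAMSIZ limit of 16 bytes including the terminating NUL. A sketch of deriving such a name; the exact hash input Calico uses is not shown in the log, so sha256 over namespace and pod here is purely an assumption for illustration:

package main

import (
	"crypto/sha256"
	"fmt"
)

// ifaceName builds a deterministic interface name that fits within the
// Linux IFNAMSIZ limit (15 visible characters). The hash input is a
// hypothetical choice, not Calico's actual scheme.
func ifaceName(prefix, namespace, pod string) string {
	sum := sha256.Sum256([]byte(namespace + "." + pod))
	return prefix + fmt.Sprintf("%x", sum)[:15-len(prefix)]
}

func main() {
	// Same shape as the names above: "cali" + 11 hex chars = 15 characters.
	fmt.Println(ifaceName("cali", "calico-system", "csi-node-driver-5cfdt"))
}
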
Nov 1 00:23:39.145255 kubelet[3182]: E1101 00:23:39.145024 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.012 [INFO][5036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.014 [INFO][5036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" iface="eth0" netns="/var/run/netns/cni-5a323161-3145-21f2-43dd-2b64971a5240" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.015 [INFO][5036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" iface="eth0" netns="/var/run/netns/cni-5a323161-3145-21f2-43dd-2b64971a5240" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.016 [INFO][5036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" iface="eth0" netns="/var/run/netns/cni-5a323161-3145-21f2-43dd-2b64971a5240" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.017 [INFO][5036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.017 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.122 [INFO][5093] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.123 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.123 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.181 [WARNING][5093] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.181 [INFO][5093] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.194 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:39.205486 containerd[1985]: 2025-11-01 00:23:39.202 [INFO][5036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:39.211314 containerd[1985]: time="2025-11-01T00:23:39.205975432Z" level=info msg="TearDown network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" successfully" Nov 1 00:23:39.211314 containerd[1985]: time="2025-11-01T00:23:39.206012196Z" level=info msg="StopPodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" returns successfully" Nov 1 00:23:39.221464 containerd[1985]: time="2025-11-01T00:23:39.220413182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-mczl8,Uid:a4244289-0ea7-4d4f-a667-210bd4cdc63c,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:39.230581 containerd[1985]: time="2025-11-01T00:23:39.230005421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-kcfrc,Uid:29fc9071-7019-4315-907a-15289e1e3c38,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d\"" Nov 1 00:23:39.243571 containerd[1985]: time="2025-11-01T00:23:39.242769918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:39.263401 containerd[1985]: time="2025-11-01T00:23:39.262471211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f2crj,Uid:8eba1079-36a0-4f1b-a35a-7ac8d14e183b,Namespace:kube-system,Attempt:1,} returns sandbox id \"916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13\"" Nov 1 00:23:39.275622 systemd[1]: run-netns-cni\x2d5a323161\x2d3145\x2d21f2\x2d43dd\x2d2b64971a5240.mount: Deactivated successfully. Nov 1 00:23:39.285303 containerd[1985]: time="2025-11-01T00:23:39.285260148Z" level=info msg="CreateContainer within sandbox \"916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:39.331251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901022120.mount: Deactivated successfully. 
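
Dense as these journal lines are, the records are regular enough that pulling out which sandbox received which address is a one-regexp job. A small parsing sketch against the "assigned addresses" format visible above:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Matches the ipam_plugin.go 299 records, e.g.:
	//   Calico CNI IPAM assigned addresses IPv4=[192.168.25.5/26] IPv6=[]
	//   ContainerID="916ed..." ...
	re := regexp.MustCompile(`assigned addresses IPv4=\[([0-9./]+)\].*?ContainerID="([0-9a-f]+)"`)
	line := `ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.5/26] IPv6=[] ContainerID="916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13"`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("container %s... got %s\n", m[2][:12], m[1])
	}
}
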
Nov 1 00:23:39.332293 containerd[1985]: time="2025-11-01T00:23:39.332194894Z" level=info msg="CreateContainer within sandbox \"916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a54905f602867abb539a4880f236ac031934b7dfee8bb64665f7f976c0e8df24\"" Nov 1 00:23:39.334534 containerd[1985]: time="2025-11-01T00:23:39.334264381Z" level=info msg="StartContainer for \"a54905f602867abb539a4880f236ac031934b7dfee8bb64665f7f976c0e8df24\"" Nov 1 00:23:39.374308 containerd[1985]: time="2025-11-01T00:23:39.374076723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cfdt,Uid:9d66f695-3c82-4cb4-ac8a-5f7c10006e53,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6\"" Nov 1 00:23:39.419427 systemd[1]: Started cri-containerd-a54905f602867abb539a4880f236ac031934b7dfee8bb64665f7f976c0e8df24.scope - libcontainer container a54905f602867abb539a4880f236ac031934b7dfee8bb64665f7f976c0e8df24. Nov 1 00:23:39.481985 containerd[1985]: time="2025-11-01T00:23:39.481927034Z" level=info msg="StartContainer for \"a54905f602867abb539a4880f236ac031934b7dfee8bb64665f7f976c0e8df24\" returns successfully" Nov 1 00:23:39.522713 containerd[1985]: time="2025-11-01T00:23:39.520548210Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:39.522713 containerd[1985]: time="2025-11-01T00:23:39.522554246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:39.522927 containerd[1985]: time="2025-11-01T00:23:39.522584315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:39.523998 kubelet[3182]: E1101 00:23:39.523950 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.524118 kubelet[3182]: E1101 00:23:39.524011 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.524647 kubelet[3182]: E1101 00:23:39.524281 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-kcfrc_calico-apiserver(29fc9071-7019-4315-907a-15289e1e3c38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.524647 kubelet[3182]: E1101 00:23:39.524327 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:23:39.525090 containerd[1985]: time="2025-11-01T00:23:39.524849959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:39.582409 systemd-networkd[1895]: cali0f78488279b: Link UP Nov 1 00:23:39.586272 systemd-networkd[1895]: cali0f78488279b: Gained carrier Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.363 [INFO][5165] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.414 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0 calico-apiserver-67d9f69bfb- calico-apiserver a4244289-0ea7-4d4f-a667-210bd4cdc63c 999 0 2025-11-01 00:23:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67d9f69bfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-202 calico-apiserver-67d9f69bfb-mczl8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f78488279b [] [] }} ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.414 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.492 [INFO][5218] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" HandleID="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.493 [INFO][5218] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" HandleID="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-202", "pod":"calico-apiserver-67d9f69bfb-mczl8", "timestamp":"2025-11-01 00:23:39.492936773 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.493 [INFO][5218] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.493 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.493 [INFO][5218] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.509 [INFO][5218] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.515 [INFO][5218] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.536 [INFO][5218] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.539 [INFO][5218] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.544 [INFO][5218] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.545 [INFO][5218] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.547 [INFO][5218] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338 Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.557 [INFO][5218] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.571 [INFO][5218] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.7/26] block=192.168.25.0/26 handle="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.572 [INFO][5218] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.7/26] handle="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" host="ip-172-31-30-202" Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.572 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:39.608898 containerd[1985]: 2025-11-01 00:23:39.572 [INFO][5218] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.7/26] IPv6=[] ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" HandleID="k8s-pod-network.17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.612518 containerd[1985]: 2025-11-01 00:23:39.575 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4244289-0ea7-4d4f-a667-210bd4cdc63c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"calico-apiserver-67d9f69bfb-mczl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f78488279b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:39.612518 containerd[1985]: 2025-11-01 00:23:39.575 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.7/32] ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.612518 containerd[1985]: 2025-11-01 00:23:39.575 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f78488279b ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.612518 containerd[1985]: 2025-11-01 00:23:39.585 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.612518 containerd[1985]: 2025-11-01 00:23:39.588 [INFO][5165] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4244289-0ea7-4d4f-a667-210bd4cdc63c", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338", Pod:"calico-apiserver-67d9f69bfb-mczl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f78488279b", MAC:"96:4a:03:c4:fd:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:39.612518 containerd[1985]: 2025-11-01 00:23:39.605 [INFO][5165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338" Namespace="calico-apiserver" Pod="calico-apiserver-67d9f69bfb-mczl8" WorkloadEndpoint="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:39.657769 containerd[1985]: time="2025-11-01T00:23:39.655976601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:39.658395 containerd[1985]: time="2025-11-01T00:23:39.657989068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:39.658395 containerd[1985]: time="2025-11-01T00:23:39.658056843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:39.659742 containerd[1985]: time="2025-11-01T00:23:39.658346495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:39.681213 containerd[1985]: time="2025-11-01T00:23:39.680843880Z" level=info msg="StopPodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\"" Nov 1 00:23:39.685573 systemd[1]: Started cri-containerd-17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338.scope - libcontainer container 17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338. 
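
Meanwhile the image pulls against ghcr.io/flatcar/calico/*:v3.30.4 keep returning 404 ("trying next host - response was http.StatusNotFound"), so kubelet reports ErrImagePull on the first failure and then parks the container in ImagePullBackOff, as with calico-kube-controllers at 00:23:39.145. Kubelet retries pulls with exponential backoff; a rough model of the escalation, assuming the commonly cited defaults of a 10s base doubling to a 300s cap (the real backoff also tracks per-image state and decay, which this sketch skips):

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the wait after each failed pull, up to the cap.
func nextDelay(d time.Duration) time.Duration {
	const maxDelay = 300 * time.Second
	if d == 0 {
		return 10 * time.Second
	}
	if d *= 2; d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextDelay(d)
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
	// 10s 20s 40s 1m20s 2m40s 5m0s 5m0s: why BackOff pods sit for minutes.
}
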
Nov 1 00:23:39.789299 containerd[1985]: time="2025-11-01T00:23:39.789145374Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:39.792152 containerd[1985]: time="2025-11-01T00:23:39.791869671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:39.792152 containerd[1985]: time="2025-11-01T00:23:39.791923375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:39.793364 kubelet[3182]: E1101 00:23:39.792884 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:39.793364 kubelet[3182]: E1101 00:23:39.792940 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:39.793364 kubelet[3182]: E1101 00:23:39.793022 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.795205 containerd[1985]: time="2025-11-01T00:23:39.794908008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:39.797400 systemd-networkd[1895]: cali1f8fdf315f0: Gained IPv6LL Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.788 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.788 [INFO][5282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" iface="eth0" netns="/var/run/netns/cni-c9a040cb-73ab-244e-fe06-b108ec9e98ab" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.788 [INFO][5282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" iface="eth0" netns="/var/run/netns/cni-c9a040cb-73ab-244e-fe06-b108ec9e98ab" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.789 [INFO][5282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" iface="eth0" netns="/var/run/netns/cni-c9a040cb-73ab-244e-fe06-b108ec9e98ab" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.789 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.789 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.865 [INFO][5298] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.865 [INFO][5298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.865 [INFO][5298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.874 [WARNING][5298] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.874 [INFO][5298] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.881 [INFO][5298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:39.892030 containerd[1985]: 2025-11-01 00:23:39.885 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:39.897104 containerd[1985]: time="2025-11-01T00:23:39.892204450Z" level=info msg="TearDown network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" successfully" Nov 1 00:23:39.897104 containerd[1985]: time="2025-11-01T00:23:39.892250933Z" level=info msg="StopPodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" returns successfully" Nov 1 00:23:39.898508 containerd[1985]: time="2025-11-01T00:23:39.898335758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qq2mr,Uid:3d0071e7-dbca-4b76-a432-c8b1bb561ab0,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:39.910463 containerd[1985]: time="2025-11-01T00:23:39.909377301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67d9f69bfb-mczl8,Uid:a4244289-0ea7-4d4f-a667-210bd4cdc63c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338\"" Nov 1 00:23:40.075623 containerd[1985]: time="2025-11-01T00:23:40.075510181Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.079752 containerd[1985]: time="2025-11-01T00:23:40.079221244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:40.079752 containerd[1985]: time="2025-11-01T00:23:40.079422197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:40.081421 kubelet[3182]: E1101 00:23:40.080534 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:40.081421 kubelet[3182]: E1101 00:23:40.080587 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:40.081421 kubelet[3182]: E1101 00:23:40.080772 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.081690 kubelet[3182]: E1101 00:23:40.080811 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:40.086893 containerd[1985]: time="2025-11-01T00:23:40.084216234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:40.086567 systemd-networkd[1895]: calid35df7cc892: Link UP Nov 1 00:23:40.088601 systemd-networkd[1895]: calid35df7cc892: Gained carrier Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:39.954 [INFO][5313] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:39.969 [INFO][5313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0 goldmane-7c778bb748- calico-system 3d0071e7-dbca-4b76-a432-c8b1bb561ab0 1021 0 2025-11-01 00:23:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-202 goldmane-7c778bb748-qq2mr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid35df7cc892 [] [] }} ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:39.969 [INFO][5313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.023 [INFO][5324] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" HandleID="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.024 [INFO][5324] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" HandleID="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-202", "pod":"goldmane-7c778bb748-qq2mr", "timestamp":"2025-11-01 00:23:40.023181775 +0000 UTC"}, Hostname:"ip-172-31-30-202", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.024 [INFO][5324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.024 [INFO][5324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.024 [INFO][5324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-202' Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.033 [INFO][5324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.039 [INFO][5324] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.046 [INFO][5324] ipam/ipam.go 511: Trying affinity for 192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.048 [INFO][5324] ipam/ipam.go 158: Attempting to load block cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.051 [INFO][5324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.25.0/26 host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.051 [INFO][5324] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.25.0/26 handle="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.053 [INFO][5324] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242 Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.059 [INFO][5324] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.25.0/26 handle="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.077 [INFO][5324] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.25.8/26] block=192.168.25.0/26 handle="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.077 [INFO][5324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.25.8/26] handle="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" host="ip-172-31-30-202" Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.077 [INFO][5324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:40.109941 containerd[1985]: 2025-11-01 00:23:40.077 [INFO][5324] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.25.8/26] IPv6=[] ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" HandleID="k8s-pod-network.01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.111372 containerd[1985]: 2025-11-01 00:23:40.080 [INFO][5313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3d0071e7-dbca-4b76-a432-c8b1bb561ab0", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"", Pod:"goldmane-7c778bb748-qq2mr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid35df7cc892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:40.111372 containerd[1985]: 2025-11-01 00:23:40.080 [INFO][5313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.8/32] ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.111372 containerd[1985]: 2025-11-01 00:23:40.080 [INFO][5313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid35df7cc892 ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.111372 containerd[1985]: 2025-11-01 00:23:40.085 [INFO][5313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.111372 containerd[1985]: 2025-11-01 00:23:40.086 [INFO][5313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" 
WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3d0071e7-dbca-4b76-a432-c8b1bb561ab0", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242", Pod:"goldmane-7c778bb748-qq2mr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid35df7cc892", MAC:"56:06:b7:38:c0:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:40.111372 containerd[1985]: 2025-11-01 00:23:40.105 [INFO][5313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242" Namespace="calico-system" Pod="goldmane-7c778bb748-qq2mr" WorkloadEndpoint="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:40.116480 systemd-networkd[1895]: calif0550e81e72: Gained IPv6LL Nov 1 00:23:40.135099 containerd[1985]: time="2025-11-01T00:23:40.134635733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:40.135099 containerd[1985]: time="2025-11-01T00:23:40.134944442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:40.135506 containerd[1985]: time="2025-11-01T00:23:40.135279315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:40.135985 containerd[1985]: time="2025-11-01T00:23:40.135865938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:40.156343 kubelet[3182]: E1101 00:23:40.155804 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:23:40.170555 systemd[1]: Started cri-containerd-01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242.scope - libcontainer container 01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242. Nov 1 00:23:40.201796 kubelet[3182]: E1101 00:23:40.201425 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:40.251755 kubelet[3182]: I1101 00:23:40.250488 3182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f2crj" podStartSLOduration=44.250462816 podStartE2EDuration="44.250462816s" podCreationTimestamp="2025-11-01 00:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:40.222281648 +0000 UTC m=+49.783449581" watchObservedRunningTime="2025-11-01 00:23:40.250462816 +0000 UTC m=+49.811630751" Nov 1 00:23:40.272670 systemd[1]: run-netns-cni\x2dc9a040cb\x2d73ab\x2d244e\x2dfe06\x2db108ec9e98ab.mount: Deactivated successfully. 
Nov 1 00:23:40.278400 containerd[1985]: time="2025-11-01T00:23:40.278216563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qq2mr,Uid:3d0071e7-dbca-4b76-a432-c8b1bb561ab0,Namespace:calico-system,Attempt:1,} returns sandbox id \"01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242\"" Nov 1 00:23:40.330585 containerd[1985]: time="2025-11-01T00:23:40.330539116Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.333273 containerd[1985]: time="2025-11-01T00:23:40.333205312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:40.333448 containerd[1985]: time="2025-11-01T00:23:40.333234038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:40.333585 kubelet[3182]: E1101 00:23:40.333546 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:40.333663 kubelet[3182]: E1101 00:23:40.333596 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:40.333904 kubelet[3182]: E1101 00:23:40.333872 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-mczl8_calico-apiserver(a4244289-0ea7-4d4f-a667-210bd4cdc63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.333997 kubelet[3182]: E1101 00:23:40.333924 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:23:40.334682 containerd[1985]: time="2025-11-01T00:23:40.334417583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:40.578119 containerd[1985]: time="2025-11-01T00:23:40.577999846Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.580472 containerd[1985]: time="2025-11-01T00:23:40.580397881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:40.580640 containerd[1985]: time="2025-11-01T00:23:40.580539858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:40.581162 kubelet[3182]: E1101 00:23:40.580811 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:40.581162 kubelet[3182]: E1101 00:23:40.580925 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:40.582014 kubelet[3182]: E1101 00:23:40.581703 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qq2mr_calico-system(3d0071e7-dbca-4b76-a432-c8b1bb561ab0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.582014 kubelet[3182]: E1101 00:23:40.581787 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:23:40.692312 systemd-networkd[1895]: cali71ed63bdbdd: Gained IPv6LL Nov 1 00:23:40.692702 systemd-networkd[1895]: cali0f78488279b: Gained IPv6LL Nov 1 00:23:41.198769 kubelet[3182]: E1101 00:23:41.197712 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:23:41.198769 kubelet[3182]: E1101 00:23:41.197892 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" 
podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:23:41.198769 kubelet[3182]: E1101 00:23:41.197975 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:23:41.199384 kubelet[3182]: E1101 00:23:41.198619 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:41.203874 systemd-networkd[1895]: calid35df7cc892: Gained IPv6LL Nov 1 00:23:42.201233 kubelet[3182]: E1101 00:23:42.201184 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:23:42.928696 kubelet[3182]: I1101 00:23:42.925542 3182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:43.744664 ntpd[1957]: Listen normally on 8 cali5c7bd1ccd2d [fe80::ecee:eeff:feee:eeee%4]:123 Nov 1 00:23:43.744791 ntpd[1957]: Listen normally on 9 cali253026eee07 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 8 cali5c7bd1ccd2d [fe80::ecee:eeff:feee:eeee%4]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 9 cali253026eee07 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 10 cali168c71da916 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 11 cali1f8fdf315f0 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 12 calif0550e81e72 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 13 cali71ed63bdbdd 
[fe80::ecee:eeff:feee:eeee%9]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 14 cali0f78488279b [fe80::ecee:eeff:feee:eeee%10]:123 Nov 1 00:23:43.745358 ntpd[1957]: 1 Nov 00:23:43 ntpd[1957]: Listen normally on 15 calid35df7cc892 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 1 00:23:43.744846 ntpd[1957]: Listen normally on 10 cali168c71da916 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 1 00:23:43.744888 ntpd[1957]: Listen normally on 11 cali1f8fdf315f0 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 1 00:23:43.744929 ntpd[1957]: Listen normally on 12 calif0550e81e72 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 1 00:23:43.744968 ntpd[1957]: Listen normally on 13 cali71ed63bdbdd [fe80::ecee:eeff:feee:eeee%9]:123 Nov 1 00:23:43.745021 ntpd[1957]: Listen normally on 14 cali0f78488279b [fe80::ecee:eeff:feee:eeee%10]:123 Nov 1 00:23:43.745059 ntpd[1957]: Listen normally on 15 calid35df7cc892 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 1 00:23:44.272371 kernel: bpftool[5496]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:23:44.558145 (udev-worker)[5510]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:23:44.560913 systemd-networkd[1895]: vxlan.calico: Link UP Nov 1 00:23:44.560919 systemd-networkd[1895]: vxlan.calico: Gained carrier Nov 1 00:23:44.607650 (udev-worker)[5515]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:23:46.260954 systemd-networkd[1895]: vxlan.calico: Gained IPv6LL Nov 1 00:23:47.808162 systemd[1]: Started sshd@7-172.31.30.202:22-139.178.89.65:55610.service - OpenSSH per-connection server daemon (139.178.89.65:55610). Nov 1 00:23:48.011997 sshd[5582]: Accepted publickey for core from 139.178.89.65 port 55610 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:23:48.015829 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:48.022076 systemd-logind[1963]: New session 8 of user core. Nov 1 00:23:48.028968 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:23:48.744590 ntpd[1957]: Listen normally on 16 vxlan.calico 192.168.25.0:123 Nov 1 00:23:48.745066 ntpd[1957]: 1 Nov 00:23:48 ntpd[1957]: Listen normally on 16 vxlan.calico 192.168.25.0:123 Nov 1 00:23:48.745066 ntpd[1957]: 1 Nov 00:23:48 ntpd[1957]: Listen normally on 17 vxlan.calico [fe80::6465:2bff:fe17:f07f%12]:123 Nov 1 00:23:48.744708 ntpd[1957]: Listen normally on 17 vxlan.calico [fe80::6465:2bff:fe17:f07f%12]:123 Nov 1 00:23:48.972860 sshd[5582]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:48.977927 systemd[1]: sshd@7-172.31.30.202:22-139.178.89.65:55610.service: Deactivated successfully. Nov 1 00:23:48.980655 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:23:48.982261 systemd-logind[1963]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:23:48.983718 systemd-logind[1963]: Removed session 8. Nov 1 00:23:50.632249 containerd[1985]: time="2025-11-01T00:23:50.631955391Z" level=info msg="StopPodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\"" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.690 [WARNING][5614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"29fc9071-7019-4315-907a-15289e1e3c38", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d", Pod:"calico-apiserver-67d9f69bfb-kcfrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f8fdf315f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.691 [INFO][5614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.691 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" iface="eth0" netns="" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.691 [INFO][5614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.691 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.732 [INFO][5621] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.733 [INFO][5621] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.733 [INFO][5621] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.745 [WARNING][5621] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.745 [INFO][5621] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.748 [INFO][5621] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:50.759272 containerd[1985]: 2025-11-01 00:23:50.753 [INFO][5614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.760797 containerd[1985]: time="2025-11-01T00:23:50.759325712Z" level=info msg="TearDown network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" successfully" Nov 1 00:23:50.760797 containerd[1985]: time="2025-11-01T00:23:50.759357267Z" level=info msg="StopPodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" returns successfully" Nov 1 00:23:50.777041 containerd[1985]: time="2025-11-01T00:23:50.776981444Z" level=info msg="RemovePodSandbox for \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\"" Nov 1 00:23:50.777041 containerd[1985]: time="2025-11-01T00:23:50.777041372Z" level=info msg="Forcibly stopping sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\"" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.841 [WARNING][5637] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"29fc9071-7019-4315-907a-15289e1e3c38", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"2c988f452d506ba01bbb7665d5f205cade29f1403e0ea657789b0be8a9d50f0d", Pod:"calico-apiserver-67d9f69bfb-kcfrc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f8fdf315f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.842 [INFO][5637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.842 [INFO][5637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" iface="eth0" netns="" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.842 [INFO][5637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.842 [INFO][5637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.878 [INFO][5644] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.878 [INFO][5644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.878 [INFO][5644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.886 [WARNING][5644] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.886 [INFO][5644] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" HandleID="k8s-pod-network.7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--kcfrc-eth0" Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.887 [INFO][5644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:50.892619 containerd[1985]: 2025-11-01 00:23:50.890 [INFO][5637] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625" Nov 1 00:23:50.892619 containerd[1985]: time="2025-11-01T00:23:50.892535514Z" level=info msg="TearDown network for sandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" successfully" Nov 1 00:23:50.909345 containerd[1985]: time="2025-11-01T00:23:50.909288622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:50.909507 containerd[1985]: time="2025-11-01T00:23:50.909371158Z" level=info msg="RemovePodSandbox \"7936817f84782882c42c34fb45c4b28184adda53cfefe2be54baaff106f94625\" returns successfully" Nov 1 00:23:50.909953 containerd[1985]: time="2025-11-01T00:23:50.909928949Z" level=info msg="StopPodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\"" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.951 [WARNING][5658] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.952 [INFO][5658] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.952 [INFO][5658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" iface="eth0" netns="" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.952 [INFO][5658] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.952 [INFO][5658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.975 [INFO][5666] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.975 [INFO][5666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.975 [INFO][5666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.982 [WARNING][5666] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.982 [INFO][5666] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.984 [INFO][5666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:50.988968 containerd[1985]: 2025-11-01 00:23:50.986 [INFO][5658] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:50.989531 containerd[1985]: time="2025-11-01T00:23:50.989007911Z" level=info msg="TearDown network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" successfully" Nov 1 00:23:50.989531 containerd[1985]: time="2025-11-01T00:23:50.989037608Z" level=info msg="StopPodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" returns successfully" Nov 1 00:23:50.989667 containerd[1985]: time="2025-11-01T00:23:50.989620835Z" level=info msg="RemovePodSandbox for \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\"" Nov 1 00:23:50.989667 containerd[1985]: time="2025-11-01T00:23:50.989660894Z" level=info msg="Forcibly stopping sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\"" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.032 [WARNING][5680] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" WorkloadEndpoint="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.032 [INFO][5680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.032 [INFO][5680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" iface="eth0" netns="" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.032 [INFO][5680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.032 [INFO][5680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.058 [INFO][5687] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.058 [INFO][5687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.058 [INFO][5687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.066 [WARNING][5687] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.066 [INFO][5687] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" HandleID="k8s-pod-network.019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Workload="ip--172--31--30--202-k8s-whisker--6976cb7758--krmm4-eth0" Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.068 [INFO][5687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.072585 containerd[1985]: 2025-11-01 00:23:51.070 [INFO][5680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9" Nov 1 00:23:51.073596 containerd[1985]: time="2025-11-01T00:23:51.072638348Z" level=info msg="TearDown network for sandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" successfully" Nov 1 00:23:51.081048 containerd[1985]: time="2025-11-01T00:23:51.080973311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:51.081048 containerd[1985]: time="2025-11-01T00:23:51.081053173Z" level=info msg="RemovePodSandbox \"019efbf545dad93d4e553b6b9b3431fda854c257d2577da8c891957830ad20e9\" returns successfully" Nov 1 00:23:51.081620 containerd[1985]: time="2025-11-01T00:23:51.081572075Z" level=info msg="StopPodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\"" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.122 [WARNING][5701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4244289-0ea7-4d4f-a667-210bd4cdc63c", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338", Pod:"calico-apiserver-67d9f69bfb-mczl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f78488279b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.122 [INFO][5701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.122 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" iface="eth0" netns="" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.122 [INFO][5701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.122 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.145 [INFO][5708] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.145 [INFO][5708] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.145 [INFO][5708] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.153 [WARNING][5708] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.153 [INFO][5708] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.154 [INFO][5708] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.158498 containerd[1985]: 2025-11-01 00:23:51.156 [INFO][5701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.158498 containerd[1985]: time="2025-11-01T00:23:51.158469632Z" level=info msg="TearDown network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" successfully" Nov 1 00:23:51.158498 containerd[1985]: time="2025-11-01T00:23:51.158494936Z" level=info msg="StopPodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" returns successfully" Nov 1 00:23:51.160345 containerd[1985]: time="2025-11-01T00:23:51.159214134Z" level=info msg="RemovePodSandbox for \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\"" Nov 1 00:23:51.160345 containerd[1985]: time="2025-11-01T00:23:51.159254291Z" level=info msg="Forcibly stopping sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\"" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.215 [WARNING][5722] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0", GenerateName:"calico-apiserver-67d9f69bfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4244289-0ea7-4d4f-a667-210bd4cdc63c", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67d9f69bfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"17a875865ab585d5d1fbe3645b42ee05eba4002f6ac991788ac7d52a07bde338", Pod:"calico-apiserver-67d9f69bfb-mczl8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f78488279b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.216 [INFO][5722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.216 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" iface="eth0" netns="" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.216 [INFO][5722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.216 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.245 [INFO][5729] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.245 [INFO][5729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.246 [INFO][5729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.252 [WARNING][5729] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.253 [INFO][5729] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" HandleID="k8s-pod-network.9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Workload="ip--172--31--30--202-k8s-calico--apiserver--67d9f69bfb--mczl8-eth0" Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.254 [INFO][5729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.259552 containerd[1985]: 2025-11-01 00:23:51.256 [INFO][5722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116" Nov 1 00:23:51.260413 containerd[1985]: time="2025-11-01T00:23:51.259617889Z" level=info msg="TearDown network for sandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" successfully" Nov 1 00:23:51.269771 containerd[1985]: time="2025-11-01T00:23:51.269701171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:51.270050 containerd[1985]: time="2025-11-01T00:23:51.269801652Z" level=info msg="RemovePodSandbox \"9d6c4547a657acefba52eb68d4684fda6a1b3def2a1f3dfc6042e2a5d9065116\" returns successfully" Nov 1 00:23:51.270766 containerd[1985]: time="2025-11-01T00:23:51.270541772Z" level=info msg="StopPodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\"" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.312 [WARNING][5744] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8eba1079-36a0-4f1b-a35a-7ac8d14e183b", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13", Pod:"coredns-66bc5c9577-f2crj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0550e81e72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.313 [INFO][5744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.313 [INFO][5744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" iface="eth0" netns="" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.313 [INFO][5744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.313 [INFO][5744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.343 [INFO][5751] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.343 [INFO][5751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.343 [INFO][5751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.350 [WARNING][5751] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.350 [INFO][5751] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.354 [INFO][5751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.357839 containerd[1985]: 2025-11-01 00:23:51.356 [INFO][5744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.358424 containerd[1985]: time="2025-11-01T00:23:51.357889470Z" level=info msg="TearDown network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" successfully" Nov 1 00:23:51.358424 containerd[1985]: time="2025-11-01T00:23:51.357920326Z" level=info msg="StopPodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" returns successfully" Nov 1 00:23:51.358929 containerd[1985]: time="2025-11-01T00:23:51.358888832Z" level=info msg="RemovePodSandbox for \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\"" Nov 1 00:23:51.359014 containerd[1985]: time="2025-11-01T00:23:51.358928952Z" level=info msg="Forcibly stopping sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\"" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.403 [WARNING][5765] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8eba1079-36a0-4f1b-a35a-7ac8d14e183b", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"916ed138c2f4925771cc0ddca4fa6ce7184db3226f8b4ca6742578357c674d13", Pod:"coredns-66bc5c9577-f2crj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0550e81e72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.403 [INFO][5765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.403 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" iface="eth0" netns="" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.403 [INFO][5765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.403 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.428 [INFO][5772] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.428 [INFO][5772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.428 [INFO][5772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.435 [WARNING][5772] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.435 [INFO][5772] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" HandleID="k8s-pod-network.57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--f2crj-eth0" Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.437 [INFO][5772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.441339 containerd[1985]: 2025-11-01 00:23:51.439 [INFO][5765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530" Nov 1 00:23:51.441896 containerd[1985]: time="2025-11-01T00:23:51.441379116Z" level=info msg="TearDown network for sandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" successfully" Nov 1 00:23:51.450013 containerd[1985]: time="2025-11-01T00:23:51.449952964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:51.450013 containerd[1985]: time="2025-11-01T00:23:51.450016372Z" level=info msg="RemovePodSandbox \"57f4b67be4fe180fa79b7470f130307809b890cc44ea86baa9819a60b1c36530\" returns successfully" Nov 1 00:23:51.450514 containerd[1985]: time="2025-11-01T00:23:51.450467540Z" level=info msg="StopPodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\"" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.493 [WARNING][5787] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ccf197b2-b2cc-466e-947f-e45189c998df", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1", Pod:"coredns-66bc5c9577-cdpgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali168c71da916", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.494 [INFO][5787] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.494 [INFO][5787] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" iface="eth0" netns="" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.494 [INFO][5787] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.494 [INFO][5787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.518 [INFO][5794] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.519 [INFO][5794] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.519 [INFO][5794] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.526 [WARNING][5794] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.526 [INFO][5794] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.528 [INFO][5794] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.532097 containerd[1985]: 2025-11-01 00:23:51.530 [INFO][5787] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.534282 containerd[1985]: time="2025-11-01T00:23:51.532146861Z" level=info msg="TearDown network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" successfully" Nov 1 00:23:51.534282 containerd[1985]: time="2025-11-01T00:23:51.532171569Z" level=info msg="StopPodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" returns successfully" Nov 1 00:23:51.534282 containerd[1985]: time="2025-11-01T00:23:51.532634803Z" level=info msg="RemovePodSandbox for \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\"" Nov 1 00:23:51.534282 containerd[1985]: time="2025-11-01T00:23:51.532668459Z" level=info msg="Forcibly stopping sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\"" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.572 [WARNING][5808] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ccf197b2-b2cc-466e-947f-e45189c998df", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"94921b1f321788cca1baef42d5c3d9a933ff365b4029708218f305025cdc31a1", Pod:"coredns-66bc5c9577-cdpgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali168c71da916", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.572 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.572 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" iface="eth0" netns="" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.572 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.573 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.602 [INFO][5815] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.602 [INFO][5815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.602 [INFO][5815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.609 [WARNING][5815] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.609 [INFO][5815] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" HandleID="k8s-pod-network.1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Workload="ip--172--31--30--202-k8s-coredns--66bc5c9577--cdpgq-eth0" Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.611 [INFO][5815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.615955 containerd[1985]: 2025-11-01 00:23:51.613 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806" Nov 1 00:23:51.616440 containerd[1985]: time="2025-11-01T00:23:51.615986729Z" level=info msg="TearDown network for sandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" successfully" Nov 1 00:23:51.622573 containerd[1985]: time="2025-11-01T00:23:51.622513015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:51.622573 containerd[1985]: time="2025-11-01T00:23:51.622580578Z" level=info msg="RemovePodSandbox \"1f1621e2b3402d2b678afe638f7b68f2954ee6c1722762e085f8929924a2b806\" returns successfully" Nov 1 00:23:51.623315 containerd[1985]: time="2025-11-01T00:23:51.623261759Z" level=info msg="StopPodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\"" Nov 1 00:23:51.685926 containerd[1985]: time="2025-11-01T00:23:51.685877720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.665 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d66f695-3c82-4cb4-ac8a-5f7c10006e53", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6", Pod:"csi-node-driver-5cfdt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71ed63bdbdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.665 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.665 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" iface="eth0" netns="" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.666 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.666 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.715 [INFO][5836] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.715 [INFO][5836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.715 [INFO][5836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.725 [WARNING][5836] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.725 [INFO][5836] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.728 [INFO][5836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.732971 containerd[1985]: 2025-11-01 00:23:51.729 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.732971 containerd[1985]: time="2025-11-01T00:23:51.731668965Z" level=info msg="TearDown network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" successfully" Nov 1 00:23:51.732971 containerd[1985]: time="2025-11-01T00:23:51.731696743Z" level=info msg="StopPodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" returns successfully" Nov 1 00:23:51.734009 containerd[1985]: time="2025-11-01T00:23:51.733675349Z" level=info msg="RemovePodSandbox for \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\"" Nov 1 00:23:51.734009 containerd[1985]: time="2025-11-01T00:23:51.733703017Z" level=info msg="Forcibly stopping sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\"" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.789 [WARNING][5850] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9d66f695-3c82-4cb4-ac8a-5f7c10006e53", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"bc36fb3420033db00b6f7146d5f5884f8b7ee3419d6ca4c639beb5478e26fec6", Pod:"csi-node-driver-5cfdt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali71ed63bdbdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.790 [INFO][5850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.790 [INFO][5850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" iface="eth0" netns="" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.790 [INFO][5850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.790 [INFO][5850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.820 [INFO][5857] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.821 [INFO][5857] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.821 [INFO][5857] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.834 [WARNING][5857] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.834 [INFO][5857] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" HandleID="k8s-pod-network.86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Workload="ip--172--31--30--202-k8s-csi--node--driver--5cfdt-eth0" Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.838 [INFO][5857] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.843186 containerd[1985]: 2025-11-01 00:23:51.841 [INFO][5850] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8" Nov 1 00:23:51.844356 containerd[1985]: time="2025-11-01T00:23:51.843238091Z" level=info msg="TearDown network for sandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" successfully" Nov 1 00:23:51.850050 containerd[1985]: time="2025-11-01T00:23:51.849986304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:51.850202 containerd[1985]: time="2025-11-01T00:23:51.850059037Z" level=info msg="RemovePodSandbox \"86354cbecfe7486dd444140420eeb375f1273e67dc6a083f659e76c8f03510a8\" returns successfully" Nov 1 00:23:51.850567 containerd[1985]: time="2025-11-01T00:23:51.850524107Z" level=info msg="StopPodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\"" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.889 [WARNING][5871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3d0071e7-dbca-4b76-a432-c8b1bb561ab0", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242", Pod:"goldmane-7c778bb748-qq2mr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid35df7cc892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.889 [INFO][5871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.889 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" iface="eth0" netns="" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.890 [INFO][5871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.890 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.923 [INFO][5878] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.923 [INFO][5878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.923 [INFO][5878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.930 [WARNING][5878] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.930 [INFO][5878] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.932 [INFO][5878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:51.936618 containerd[1985]: 2025-11-01 00:23:51.934 [INFO][5871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:51.938560 containerd[1985]: time="2025-11-01T00:23:51.936679744Z" level=info msg="TearDown network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" successfully" Nov 1 00:23:51.938560 containerd[1985]: time="2025-11-01T00:23:51.936735776Z" level=info msg="StopPodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" returns successfully" Nov 1 00:23:51.938560 containerd[1985]: time="2025-11-01T00:23:51.937247123Z" level=info msg="RemovePodSandbox for \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\"" Nov 1 00:23:51.938560 containerd[1985]: time="2025-11-01T00:23:51.937285621Z" level=info msg="Forcibly stopping sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\"" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.977 [WARNING][5892] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3d0071e7-dbca-4b76-a432-c8b1bb561ab0", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"01fe7663389ff5aa96775c46c24fe75dd96eec3a8ab10b60fdc70de25e736242", Pod:"goldmane-7c778bb748-qq2mr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid35df7cc892", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.977 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.977 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" iface="eth0" netns="" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.977 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.977 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.999 [INFO][5899] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.999 [INFO][5899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:51.999 [INFO][5899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:52.006 [WARNING][5899] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:52.006 [INFO][5899] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" HandleID="k8s-pod-network.1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Workload="ip--172--31--30--202-k8s-goldmane--7c778bb748--qq2mr-eth0" Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:52.009 [INFO][5899] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:52.014073 containerd[1985]: 2025-11-01 00:23:52.011 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384" Nov 1 00:23:52.014073 containerd[1985]: time="2025-11-01T00:23:52.012952179Z" level=info msg="TearDown network for sandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" successfully" Nov 1 00:23:52.020595 containerd[1985]: time="2025-11-01T00:23:52.020540732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:52.020742 containerd[1985]: time="2025-11-01T00:23:52.020602514Z" level=info msg="RemovePodSandbox \"1a2a3945d4422eb96d81d3404192f641fbcf34bd9b5b0d92b08a9b7b6d1f2384\" returns successfully" Nov 1 00:23:52.021639 containerd[1985]: time="2025-11-01T00:23:52.021241111Z" level=info msg="StopPodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\"" Nov 1 00:23:52.046004 containerd[1985]: time="2025-11-01T00:23:52.045955108Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:52.048901 containerd[1985]: time="2025-11-01T00:23:52.048692539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:52.048901 containerd[1985]: time="2025-11-01T00:23:52.048836862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:52.049157 kubelet[3182]: E1101 00:23:52.049032 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:52.049157 kubelet[3182]: E1101 00:23:52.049087 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 
00:23:52.050686 kubelet[3182]: E1101 00:23:52.049355 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85c56f6579-hjmzt_calico-system(3b1a064e-eaea-4078-a670-51fea2063bf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:52.050686 kubelet[3182]: E1101 00:23:52.049404 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:23:52.051456 containerd[1985]: time="2025-11-01T00:23:52.050090459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.065 [WARNING][5913] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0", GenerateName:"calico-kube-controllers-85c56f6579-", Namespace:"calico-system", SelfLink:"", UID:"3b1a064e-eaea-4078-a670-51fea2063bf7", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c56f6579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020", Pod:"calico-kube-controllers-85c56f6579-hjmzt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253026eee07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.066 [INFO][5913] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.066 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" iface="eth0" netns="" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.066 [INFO][5913] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.066 [INFO][5913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.101 [INFO][5920] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.101 [INFO][5920] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.101 [INFO][5920] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.108 [WARNING][5920] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.108 [INFO][5920] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.110 [INFO][5920] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:52.114262 containerd[1985]: 2025-11-01 00:23:52.112 [INFO][5913] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.114833 containerd[1985]: time="2025-11-01T00:23:52.114220998Z" level=info msg="TearDown network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" successfully" Nov 1 00:23:52.114833 containerd[1985]: time="2025-11-01T00:23:52.114815686Z" level=info msg="StopPodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" returns successfully" Nov 1 00:23:52.115528 containerd[1985]: time="2025-11-01T00:23:52.115471971Z" level=info msg="RemovePodSandbox for \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\"" Nov 1 00:23:52.115528 containerd[1985]: time="2025-11-01T00:23:52.115504286Z" level=info msg="Forcibly stopping sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\"" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.158 [WARNING][5934] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0", GenerateName:"calico-kube-controllers-85c56f6579-", Namespace:"calico-system", SelfLink:"", UID:"3b1a064e-eaea-4078-a670-51fea2063bf7", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c56f6579", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-202", ContainerID:"11bde11a084c297b3dda672472937a5c2988473976952beaa24d006d0a6eb020", Pod:"calico-kube-controllers-85c56f6579-hjmzt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali253026eee07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.158 [INFO][5934] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.158 [INFO][5934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" iface="eth0" netns="" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.158 [INFO][5934] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.158 [INFO][5934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.185 [INFO][5941] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.185 [INFO][5941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.185 [INFO][5941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.196 [WARNING][5941] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.197 [INFO][5941] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" HandleID="k8s-pod-network.e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Workload="ip--172--31--30--202-k8s-calico--kube--controllers--85c56f6579--hjmzt-eth0" Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.199 [INFO][5941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:52.203571 containerd[1985]: 2025-11-01 00:23:52.201 [INFO][5934] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f" Nov 1 00:23:52.204226 containerd[1985]: time="2025-11-01T00:23:52.203608334Z" level=info msg="TearDown network for sandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" successfully" Nov 1 00:23:52.210337 containerd[1985]: time="2025-11-01T00:23:52.210285445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:52.210469 containerd[1985]: time="2025-11-01T00:23:52.210363023Z" level=info msg="RemovePodSandbox \"e6e4ead0d07b9a42441eb1b9dbafeae524712060192fa76e14b0812e1b910b9f\" returns successfully" Nov 1 00:23:52.312959 containerd[1985]: time="2025-11-01T00:23:52.312814602Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:52.314955 containerd[1985]: time="2025-11-01T00:23:52.314893123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:52.315239 containerd[1985]: time="2025-11-01T00:23:52.314917218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:52.315411 kubelet[3182]: E1101 00:23:52.315355 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:52.315496 kubelet[3182]: E1101 00:23:52.315418 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:52.315773 kubelet[3182]: E1101 00:23:52.315736 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-67d9f69bfb-kcfrc_calico-apiserver(29fc9071-7019-4315-907a-15289e1e3c38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:52.315885 kubelet[3182]: E1101 00:23:52.315791 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:23:52.317176 containerd[1985]: time="2025-11-01T00:23:52.317139550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:52.570201 containerd[1985]: time="2025-11-01T00:23:52.570036758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:52.572555 containerd[1985]: time="2025-11-01T00:23:52.572485707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:52.572765 containerd[1985]: time="2025-11-01T00:23:52.572597362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:52.572848 kubelet[3182]: E1101 00:23:52.572805 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:52.572906 kubelet[3182]: E1101 00:23:52.572853 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:52.572988 kubelet[3182]: E1101 00:23:52.572956 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:52.574766 containerd[1985]: time="2025-11-01T00:23:52.574735853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:52.815580 containerd[1985]: time="2025-11-01T00:23:52.815424662Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:52.817682 containerd[1985]: time="2025-11-01T00:23:52.817632538Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:52.817926 containerd[1985]: time="2025-11-01T00:23:52.817669132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:52.817963 kubelet[3182]: E1101 00:23:52.817883 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:52.817963 kubelet[3182]: E1101 00:23:52.817923 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:52.818039 kubelet[3182]: E1101 00:23:52.817997 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:52.818082 kubelet[3182]: E1101 00:23:52.818036 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:23:53.681426 containerd[1985]: time="2025-11-01T00:23:53.681387562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:53.928649 containerd[1985]: time="2025-11-01T00:23:53.928596324Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:53.930965 containerd[1985]: time="2025-11-01T00:23:53.930840773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:53.930965 containerd[1985]: 
time="2025-11-01T00:23:53.930912792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:53.931288 kubelet[3182]: E1101 00:23:53.931241 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:53.931784 kubelet[3182]: E1101 00:23:53.931340 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:53.931784 kubelet[3182]: E1101 00:23:53.931477 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-mczl8_calico-apiserver(a4244289-0ea7-4d4f-a667-210bd4cdc63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:53.931784 kubelet[3182]: E1101 00:23:53.931529 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:23:54.022855 systemd[1]: Started sshd@8-172.31.30.202:22-139.178.89.65:55626.service - OpenSSH per-connection server daemon (139.178.89.65:55626). Nov 1 00:23:54.237826 sshd[5948]: Accepted publickey for core from 139.178.89.65 port 55626 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:23:54.242178 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:54.247999 systemd-logind[1963]: New session 9 of user core. Nov 1 00:23:54.253022 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:23:54.566176 sshd[5948]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:54.571687 systemd[1]: sshd@8-172.31.30.202:22-139.178.89.65:55626.service: Deactivated successfully. Nov 1 00:23:54.574814 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:23:54.576062 systemd-logind[1963]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:23:54.577174 systemd-logind[1963]: Removed session 9. 
Nov 1 00:23:55.682401 containerd[1985]: time="2025-11-01T00:23:55.682030378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:55.941361 containerd[1985]: time="2025-11-01T00:23:55.941228459Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:55.943333 containerd[1985]: time="2025-11-01T00:23:55.943261025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:55.943444 containerd[1985]: time="2025-11-01T00:23:55.943351009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:55.943545 kubelet[3182]: E1101 00:23:55.943508 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:55.943868 kubelet[3182]: E1101 00:23:55.943553 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:55.943868 kubelet[3182]: E1101 00:23:55.943617 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:55.944775 containerd[1985]: time="2025-11-01T00:23:55.944749251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:56.183429 containerd[1985]: time="2025-11-01T00:23:56.183373647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:56.186331 containerd[1985]: time="2025-11-01T00:23:56.185552022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:56.186331 containerd[1985]: time="2025-11-01T00:23:56.185663753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:56.186532 kubelet[3182]: E1101 00:23:56.186011 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 
1 00:23:56.186532 kubelet[3182]: E1101 00:23:56.186061 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:56.186532 kubelet[3182]: E1101 00:23:56.186145 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:56.188464 kubelet[3182]: E1101 00:23:56.186199 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:23:56.684115 containerd[1985]: time="2025-11-01T00:23:56.684077942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:56.936389 containerd[1985]: time="2025-11-01T00:23:56.936112805Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:56.938164 containerd[1985]: time="2025-11-01T00:23:56.938110698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:56.938305 containerd[1985]: time="2025-11-01T00:23:56.938209603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:56.938424 kubelet[3182]: E1101 00:23:56.938388 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:56.938498 kubelet[3182]: E1101 00:23:56.938435 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:56.938547 kubelet[3182]: E1101 00:23:56.938526 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qq2mr_calico-system(3d0071e7-dbca-4b76-a432-c8b1bb561ab0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:56.938605 kubelet[3182]: E1101 00:23:56.938566 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:23:59.610149 systemd[1]: Started sshd@9-172.31.30.202:22-139.178.89.65:42278.service - OpenSSH per-connection server daemon (139.178.89.65:42278). Nov 1 00:23:59.768020 sshd[5972]: Accepted publickey for core from 139.178.89.65 port 42278 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:23:59.769530 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:59.774692 systemd-logind[1963]: New session 10 of user core. Nov 1 00:23:59.777994 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:23:59.982204 sshd[5972]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:59.986863 systemd[1]: sshd@9-172.31.30.202:22-139.178.89.65:42278.service: Deactivated successfully. Nov 1 00:23:59.989695 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:23:59.990566 systemd-logind[1963]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:23:59.991916 systemd-logind[1963]: Removed session 10. Nov 1 00:24:04.686147 kubelet[3182]: E1101 00:24:04.684649 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:24:05.023090 systemd[1]: Started sshd@10-172.31.30.202:22-139.178.89.65:42284.service - OpenSSH per-connection server daemon (139.178.89.65:42284). Nov 1 00:24:05.135977 systemd[1]: run-containerd-runc-k8s.io-63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b-runc.9pWp9h.mount: Deactivated successfully. Nov 1 00:24:05.187851 sshd[5992]: Accepted publickey for core from 139.178.89.65 port 42284 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:05.190468 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:05.200272 systemd-logind[1963]: New session 11 of user core. Nov 1 00:24:05.201951 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 1 00:24:05.503297 sshd[5992]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:05.507360 systemd[1]: sshd@10-172.31.30.202:22-139.178.89.65:42284.service: Deactivated successfully. Nov 1 00:24:05.509309 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:24:05.510424 systemd-logind[1963]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:24:05.512178 systemd-logind[1963]: Removed session 11. Nov 1 00:24:05.540818 systemd[1]: Started sshd@11-172.31.30.202:22-139.178.89.65:42288.service - OpenSSH per-connection server daemon (139.178.89.65:42288). Nov 1 00:24:05.682791 kubelet[3182]: E1101 00:24:05.681975 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:24:05.715847 sshd[6029]: Accepted publickey for core from 139.178.89.65 port 42288 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:05.717337 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:05.722629 systemd-logind[1963]: New session 12 of user core. Nov 1 00:24:05.731098 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:24:06.002069 sshd[6029]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:06.005515 systemd-logind[1963]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:24:06.007312 systemd[1]: sshd@11-172.31.30.202:22-139.178.89.65:42288.service: Deactivated successfully. Nov 1 00:24:06.012526 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:24:06.017579 systemd-logind[1963]: Removed session 12. Nov 1 00:24:06.041590 systemd[1]: Started sshd@12-172.31.30.202:22-139.178.89.65:36936.service - OpenSSH per-connection server daemon (139.178.89.65:36936). Nov 1 00:24:06.199594 sshd[6040]: Accepted publickey for core from 139.178.89.65 port 36936 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:06.201322 sshd[6040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:06.207329 systemd-logind[1963]: New session 13 of user core. Nov 1 00:24:06.213974 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:24:06.434455 sshd[6040]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:06.438411 systemd[1]: sshd@12-172.31.30.202:22-139.178.89.65:36936.service: Deactivated successfully. Nov 1 00:24:06.440539 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:24:06.441688 systemd-logind[1963]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:24:06.442783 systemd-logind[1963]: Removed session 13. 
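Note the shift in the kubelet records from ErrImagePull (an attempt that just failed) to ImagePullBackOff ("Back-off pulling image ..."): after repeated failures the kubelet stops pulling on every pod sync and instead waits an exponentially growing interval between attempts. The pattern, with illustrative constants rather than the kubelet's actual tuning:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pullWithBackoff retries pull, doubling the wait after each failure up to
// max — the behavior behind the "Back-off pulling image" messages above.
func pullWithBackoff(pull func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := pull(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, backing off %v\n", i+1, delay)
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return errors.New("still failing after back-off; pod stays in ImagePullBackOff")
}

func main() {
	notFound := func() error { return errors.New("image not found") }
	fmt.Println(pullWithBackoff(notFound, 100*time.Millisecond, 2*time.Second, 5))
}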
Nov 1 00:24:06.684903 kubelet[3182]: E1101 00:24:06.683954 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:24:07.684960 kubelet[3182]: E1101 00:24:07.684907 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:24:10.685794 kubelet[3182]: E1101 00:24:10.684903 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:24:11.471231 systemd[1]: Started sshd@13-172.31.30.202:22-139.178.89.65:36950.service - OpenSSH per-connection server daemon (139.178.89.65:36950). Nov 1 00:24:11.621700 sshd[6057]: Accepted publickey for core from 139.178.89.65 port 36950 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:11.623300 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:11.629069 systemd-logind[1963]: New session 14 of user core. Nov 1 00:24:11.634017 systemd[1]: Started session-14.scope - Session 14 of User core. 
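When a pod has several failing containers, the kubelet reports them in a single event, as in the whisker record above: "[failed to \"StartContainer\" for \"whisker\" ..., failed to \"StartContainer\" for \"whisker-backend\" ...]". The standard-library way to aggregate errors like that is errors.Join (Go 1.20+); it separates messages with newlines rather than kubelet's bracketed list, but the shape is the same:

package main

import (
	"errors"
	"fmt"
)

func startContainer(name string) error {
	// Stand-in: every start fails the way the log shows.
	return fmt.Errorf("failed to %q for %q with ErrImagePull: image not found",
		"StartContainer", name)
}

func syncPod(containers []string) error {
	var errs []error
	for _, c := range containers {
		if err := startContainer(c); err != nil {
			errs = append(errs, err) // keep going; report all failures at once
		}
	}
	return errors.Join(errs...) // nil if errs is empty
}

func main() {
	if err := syncPod([]string{"whisker", "whisker-backend"}); err != nil {
		fmt.Printf("Error syncing pod, skipping: %v\n", err)
	}
}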
Nov 1 00:24:11.684753 kubelet[3182]: E1101 00:24:11.683681 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:24:11.853588 sshd[6057]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:11.857814 systemd[1]: sshd@13-172.31.30.202:22-139.178.89.65:36950.service: Deactivated successfully. Nov 1 00:24:11.858206 systemd-logind[1963]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:24:11.860952 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:24:11.863642 systemd-logind[1963]: Removed session 14. Nov 1 00:24:16.688464 containerd[1985]: time="2025-11-01T00:24:16.688036433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:16.890000 systemd[1]: Started sshd@14-172.31.30.202:22-139.178.89.65:33654.service - OpenSSH per-connection server daemon (139.178.89.65:33654). 
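Each "sshd@N-LOCAL:22-REMOTE:PORT.service" unit in these records is one accepted TCP connection: sshd runs here as a per-connection daemon, so a unit appears at accept time and is deactivated when the peer disconnects. The same accept-per-connection shape in plain Go, with the unit naming imitated only for readability (systemd's real mechanism is socket activation, which this sketch does not implement):

package main

import (
	"fmt"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		panic(err)
	}
	for n := 0; ; n++ {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		name := fmt.Sprintf("sshd@%d-%s-%s.service", n, conn.LocalAddr(), conn.RemoteAddr())
		go func(c net.Conn, unit string) {
			defer c.Close()
			fmt.Println("Started", unit)         // cf. "Started sshd@…service - OpenSSH per-connection server daemon"
			fmt.Fprintln(c, "connection closed") // stand-in for the SSH exchange
			fmt.Println(unit + ": Deactivated successfully.")
		}(conn, name)
	}
}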
Nov 1 00:24:16.950782 containerd[1985]: time="2025-11-01T00:24:16.950596927Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:16.952771 containerd[1985]: time="2025-11-01T00:24:16.952630201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:16.952771 containerd[1985]: time="2025-11-01T00:24:16.952704521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:16.952986 kubelet[3182]: E1101 00:24:16.952941 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:16.953296 kubelet[3182]: E1101 00:24:16.952990 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:16.953296 kubelet[3182]: E1101 00:24:16.953066 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85c56f6579-hjmzt_calico-system(3b1a064e-eaea-4078-a670-51fea2063bf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:16.953296 kubelet[3182]: E1101 00:24:16.953096 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:24:17.102389 sshd[6070]: Accepted publickey for core from 139.178.89.65 port 33654 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:17.110807 sshd[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:17.116473 systemd-logind[1963]: New session 15 of user core. Nov 1 00:24:17.121115 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:24:17.454233 sshd[6070]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:17.458155 systemd-logind[1963]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:24:17.459136 systemd[1]: sshd@14-172.31.30.202:22-139.178.89.65:33654.service: Deactivated successfully. 
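Every failure names the same reference three times — in the pull request, the resolve step, and the registry's answer. A reference such as ghcr.io/flatcar/calico/kube-controllers:v3.30.4 decomposes into registry host, repository path, and tag; a stdlib-only approximation of that split (containerd's own reference parser also handles digests, ports, and default registries, which this sketch ignores):

package main

import (
	"fmt"
	"strings"
)

// splitRef breaks "host/path/name:tag" into its parts. It covers the shape
// of the references seen in this log, nothing more.
func splitRef(ref string) (host, repo, tag string) {
	host, rest, _ := strings.Cut(ref, "/")
	repo, tag, ok := strings.Cut(rest, ":")
	if !ok {
		tag = "latest" // assumed default when no tag is given
	}
	return host, repo, tag
}

func main() {
	h, r, t := splitRef("ghcr.io/flatcar/calico/kube-controllers:v3.30.4")
	fmt.Println(h, r, t) // ghcr.io flatcar/calico/kube-controllers v3.30.4
}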
Nov 1 00:24:17.461167 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:24:17.462272 systemd-logind[1963]: Removed session 15. Nov 1 00:24:19.683787 containerd[1985]: time="2025-11-01T00:24:19.683418776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:19.942734 containerd[1985]: time="2025-11-01T00:24:19.942594451Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:19.945088 containerd[1985]: time="2025-11-01T00:24:19.944968905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:19.945088 containerd[1985]: time="2025-11-01T00:24:19.945019161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:19.945352 kubelet[3182]: E1101 00:24:19.945242 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:19.945352 kubelet[3182]: E1101 00:24:19.945294 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:19.945843 kubelet[3182]: E1101 00:24:19.945386 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-mczl8_calico-apiserver(a4244289-0ea7-4d4f-a667-210bd4cdc63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:19.945843 kubelet[3182]: E1101 00:24:19.945430 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:24:20.693945 containerd[1985]: time="2025-11-01T00:24:20.693660436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:20.950268 containerd[1985]: time="2025-11-01T00:24:20.950133403Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:20.952418 containerd[1985]: time="2025-11-01T00:24:20.952311855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:20.952418 containerd[1985]: time="2025-11-01T00:24:20.952358192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:20.952631 kubelet[3182]: E1101 00:24:20.952526 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:20.952631 kubelet[3182]: E1101 00:24:20.952578 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:20.953978 kubelet[3182]: E1101 00:24:20.952657 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:20.954419 containerd[1985]: time="2025-11-01T00:24:20.954389069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:24:21.198633 containerd[1985]: time="2025-11-01T00:24:21.198582017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:21.202599 containerd[1985]: time="2025-11-01T00:24:21.201248902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:21.202599 containerd[1985]: time="2025-11-01T00:24:21.201285941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:21.202775 kubelet[3182]: E1101 00:24:21.201851 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:21.202775 kubelet[3182]: E1101 00:24:21.201897 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:21.202775 kubelet[3182]: E1101 00:24:21.201990 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:21.203049 kubelet[3182]: E1101 00:24:21.202028 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:24:21.682100 containerd[1985]: time="2025-11-01T00:24:21.682028198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:21.935220 containerd[1985]: time="2025-11-01T00:24:21.935090253Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:21.937326 containerd[1985]: time="2025-11-01T00:24:21.937209068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:21.937326 containerd[1985]: time="2025-11-01T00:24:21.937268611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:21.937462 kubelet[3182]: E1101 00:24:21.937429 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:21.937507 kubelet[3182]: E1101 00:24:21.937465 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:21.937563 kubelet[3182]: E1101 00:24:21.937534 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-kcfrc_calico-apiserver(29fc9071-7019-4315-907a-15289e1e3c38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:21.937598 kubelet[3182]: E1101 00:24:21.937572 3182 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:24:22.495281 systemd[1]: Started sshd@15-172.31.30.202:22-139.178.89.65:33660.service - OpenSSH per-connection server daemon (139.178.89.65:33660). Nov 1 00:24:22.690560 sshd[6086]: Accepted publickey for core from 139.178.89.65 port 33660 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:22.691856 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:22.699384 systemd-logind[1963]: New session 16 of user core. Nov 1 00:24:22.703987 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:24:23.128501 sshd[6086]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:23.136399 systemd[1]: sshd@15-172.31.30.202:22-139.178.89.65:33660.service: Deactivated successfully. Nov 1 00:24:23.139851 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:24:23.141887 systemd-logind[1963]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:24:23.143804 systemd-logind[1963]: Removed session 16. Nov 1 00:24:23.681974 containerd[1985]: time="2025-11-01T00:24:23.681735502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:23.935904 containerd[1985]: time="2025-11-01T00:24:23.935611754Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:23.937840 containerd[1985]: time="2025-11-01T00:24:23.937786396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:23.938011 containerd[1985]: time="2025-11-01T00:24:23.937825847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:23.938066 kubelet[3182]: E1101 00:24:23.938023 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:23.938383 kubelet[3182]: E1101 00:24:23.938064 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:23.938383 kubelet[3182]: E1101 00:24:23.938257 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:23.939407 containerd[1985]: time="2025-11-01T00:24:23.939027974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:24.194448 containerd[1985]: time="2025-11-01T00:24:24.194306434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:24.197037 containerd[1985]: time="2025-11-01T00:24:24.196394835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:24.197907 containerd[1985]: time="2025-11-01T00:24:24.196989791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:24.197993 kubelet[3182]: E1101 00:24:24.197790 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:24.197993 kubelet[3182]: E1101 00:24:24.197859 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:24.198677 kubelet[3182]: E1101 00:24:24.198093 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qq2mr_calico-system(3d0071e7-dbca-4b76-a432-c8b1bb561ab0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:24.198677 kubelet[3182]: E1101 00:24:24.198138 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:24:24.199001 containerd[1985]: time="2025-11-01T00:24:24.198373742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:24.444767 containerd[1985]: time="2025-11-01T00:24:24.444619957Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:24.447050 containerd[1985]: time="2025-11-01T00:24:24.446983040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:24.447179 containerd[1985]: time="2025-11-01T00:24:24.446988922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:24.447401 kubelet[3182]: E1101 00:24:24.447365 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:24.447466 kubelet[3182]: E1101 00:24:24.447409 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:24.447497 kubelet[3182]: E1101 00:24:24.447479 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:24.447563 kubelet[3182]: E1101 00:24:24.447516 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:24:28.161065 systemd[1]: Started sshd@16-172.31.30.202:22-139.178.89.65:52200.service - OpenSSH per-connection server daemon (139.178.89.65:52200). Nov 1 00:24:28.344321 sshd[6109]: Accepted publickey for core from 139.178.89.65 port 52200 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:28.347317 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:28.354738 systemd-logind[1963]: New session 17 of user core. Nov 1 00:24:28.357987 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:24:28.618104 sshd[6109]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:28.621641 systemd-logind[1963]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:24:28.623039 systemd[1]: sshd@16-172.31.30.202:22-139.178.89.65:52200.service: Deactivated successfully. 
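Six distinct images have now failed identically, which points at the v3.30.4 tags being absent from ghcr.io/flatcar rather than any node-local problem. Extracting the set of failing references from a journal dump is a one-screen scan of the image="..." field the kubelet attaches to these errors (this assumes one record per line, as journalctl normally emits; the dump above wraps several records per line):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Counts how often each image appears in kubelet pull errors, fed with
// something like: journalctl | go run scan.go
func main() {
	imageField := regexp.MustCompile(`image="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
	for sc.Scan() {
		for _, m := range imageField.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for img, n := range counts {
		fmt.Printf("%4d  %s\n", n, img)
	}
}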
Nov 1 00:24:28.625718 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:24:28.627074 systemd-logind[1963]: Removed session 17. Nov 1 00:24:28.649129 systemd[1]: Started sshd@17-172.31.30.202:22-139.178.89.65:52216.service - OpenSSH per-connection server daemon (139.178.89.65:52216). Nov 1 00:24:28.829397 sshd[6122]: Accepted publickey for core from 139.178.89.65 port 52216 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:28.831440 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:28.837591 systemd-logind[1963]: New session 18 of user core. Nov 1 00:24:28.842015 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:24:29.587559 sshd[6122]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:29.602851 systemd[1]: sshd@17-172.31.30.202:22-139.178.89.65:52216.service: Deactivated successfully. Nov 1 00:24:29.607673 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:24:29.614512 systemd-logind[1963]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:24:29.632158 systemd[1]: Started sshd@18-172.31.30.202:22-139.178.89.65:52232.service - OpenSSH per-connection server daemon (139.178.89.65:52232). Nov 1 00:24:29.633687 systemd-logind[1963]: Removed session 18. Nov 1 00:24:29.828979 sshd[6133]: Accepted publickey for core from 139.178.89.65 port 52232 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:29.832621 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:29.839838 systemd-logind[1963]: New session 19 of user core. Nov 1 00:24:29.845001 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:24:30.644640 sshd[6133]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:30.650201 systemd-logind[1963]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:24:30.652066 systemd[1]: sshd@18-172.31.30.202:22-139.178.89.65:52232.service: Deactivated successfully. Nov 1 00:24:30.654654 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:24:30.655681 systemd-logind[1963]: Removed session 19. Nov 1 00:24:30.684969 systemd[1]: Started sshd@19-172.31.30.202:22-139.178.89.65:52240.service - OpenSSH per-connection server daemon (139.178.89.65:52240). Nov 1 00:24:30.693438 kubelet[3182]: E1101 00:24:30.693259 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:24:30.891510 sshd[6148]: Accepted publickey for core from 139.178.89.65 port 52240 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:30.893835 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:30.900619 systemd-logind[1963]: New session 20 of user core. Nov 1 00:24:30.911324 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 1 00:24:31.679152 sshd[6148]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:31.686014 systemd[1]: sshd@19-172.31.30.202:22-139.178.89.65:52240.service: Deactivated successfully. Nov 1 00:24:31.690198 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:24:31.691808 systemd-logind[1963]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:24:31.693531 systemd-logind[1963]: Removed session 20. Nov 1 00:24:31.720111 systemd[1]: Started sshd@20-172.31.30.202:22-139.178.89.65:52250.service - OpenSSH per-connection server daemon (139.178.89.65:52250). Nov 1 00:24:31.921269 sshd[6161]: Accepted publickey for core from 139.178.89.65 port 52250 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:31.923531 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:31.930401 systemd-logind[1963]: New session 21 of user core. Nov 1 00:24:31.949017 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:24:32.184752 sshd[6161]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:32.190814 systemd[1]: sshd@20-172.31.30.202:22-139.178.89.65:52250.service: Deactivated successfully. Nov 1 00:24:32.193595 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:24:32.194630 systemd-logind[1963]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:24:32.196509 systemd-logind[1963]: Removed session 21. Nov 1 00:24:33.682963 kubelet[3182]: E1101 00:24:33.682612 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:24:35.237966 systemd[1]: run-containerd-runc-k8s.io-63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b-runc.BLVgJX.mount: Deactivated successfully. 
Nov 1 00:24:35.683190 kubelet[3182]: E1101 00:24:35.682584 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:24:35.683190 kubelet[3182]: E1101 00:24:35.682584 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:24:37.223311 systemd[1]: Started sshd@21-172.31.30.202:22-139.178.89.65:60226.service - OpenSSH per-connection server daemon (139.178.89.65:60226). Nov 1 00:24:37.402924 sshd[6197]: Accepted publickey for core from 139.178.89.65 port 60226 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:37.405048 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:37.411909 systemd-logind[1963]: New session 22 of user core. Nov 1 00:24:37.419406 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:24:37.640601 sshd[6197]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:37.644621 systemd[1]: sshd@21-172.31.30.202:22-139.178.89.65:60226.service: Deactivated successfully. Nov 1 00:24:37.647295 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:24:37.649639 systemd-logind[1963]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:24:37.651482 systemd-logind[1963]: Removed session 22. 
Nov 1 00:24:37.682584 kubelet[3182]: E1101 00:24:37.682151 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:24:39.682991 kubelet[3182]: E1101 00:24:39.682401 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:24:42.682098 systemd[1]: Started sshd@22-172.31.30.202:22-139.178.89.65:60238.service - OpenSSH per-connection server daemon (139.178.89.65:60238). Nov 1 00:24:42.866163 sshd[6210]: Accepted publickey for core from 139.178.89.65 port 60238 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:42.868322 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:42.878599 systemd-logind[1963]: New session 23 of user core. Nov 1 00:24:42.883986 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:24:43.211058 sshd[6210]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:43.217552 systemd[1]: sshd@22-172.31.30.202:22-139.178.89.65:60238.service: Deactivated successfully. Nov 1 00:24:43.222824 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:24:43.225841 systemd-logind[1963]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:24:43.228196 systemd-logind[1963]: Removed session 23. 
Nov 1 00:24:43.684240 kubelet[3182]: E1101 00:24:43.684170 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:24:44.688645 kubelet[3182]: E1101 00:24:44.688531 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:24:46.689738 kubelet[3182]: E1101 00:24:46.689382 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:24:48.251148 systemd[1]: Started sshd@23-172.31.30.202:22-139.178.89.65:44810.service - OpenSSH per-connection server daemon (139.178.89.65:44810). Nov 1 00:24:48.450983 sshd[6222]: Accepted publickey for core from 139.178.89.65 port 44810 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:48.454614 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:48.462159 systemd-logind[1963]: New session 24 of user core. Nov 1 00:24:48.472003 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 1 00:24:48.685925 kubelet[3182]: E1101 00:24:48.684676 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:24:48.955000 sshd[6222]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:48.960443 systemd[1]: sshd@23-172.31.30.202:22-139.178.89.65:44810.service: Deactivated successfully. Nov 1 00:24:48.964249 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:24:48.967186 systemd-logind[1963]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:24:48.969363 systemd-logind[1963]: Removed session 24. Nov 1 00:24:49.682665 kubelet[3182]: E1101 00:24:49.682463 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:24:50.693935 kubelet[3182]: E1101 00:24:50.693754 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:24:53.999427 systemd[1]: Started sshd@24-172.31.30.202:22-139.178.89.65:44812.service - OpenSSH per-connection server daemon (139.178.89.65:44812). Nov 1 00:24:54.204071 sshd[6237]: Accepted publickey for core from 139.178.89.65 port 44812 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:54.211029 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:54.223333 systemd-logind[1963]: New session 25 of user core. Nov 1 00:24:54.226568 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 00:24:54.648389 sshd[6237]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:54.655189 systemd-logind[1963]: Session 25 logged out. Waiting for processes to exit. 
Nov 1 00:24:54.657879 systemd[1]: sshd@24-172.31.30.202:22-139.178.89.65:44812.service: Deactivated successfully. Nov 1 00:24:54.662102 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:24:54.664455 systemd-logind[1963]: Removed session 25. Nov 1 00:24:54.683518 kubelet[3182]: E1101 00:24:54.683263 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:24:56.686615 kubelet[3182]: E1101 00:24:56.686561 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:24:57.681929 kubelet[3182]: E1101 00:24:57.681878 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:24:59.681040 systemd[1]: Started sshd@25-172.31.30.202:22-139.178.89.65:47622.service - OpenSSH per-connection server daemon (139.178.89.65:47622). Nov 1 00:24:59.843997 sshd[6252]: Accepted publickey for core from 139.178.89.65 port 47622 ssh2: RSA SHA256:55bSG+h4mODlbkX7Nhwtenl3SmfUgZaSzRsbs+4SxJs Nov 1 00:24:59.845441 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:59.849934 systemd-logind[1963]: New session 26 of user core. Nov 1 00:24:59.852970 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 00:25:00.230804 sshd[6252]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:00.237582 systemd-logind[1963]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:25:00.238790 systemd[1]: sshd@25-172.31.30.202:22-139.178.89.65:47622.service: Deactivated successfully. 
Nov 1 00:25:00.242966 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:25:00.249379 systemd-logind[1963]: Removed session 26. Nov 1 00:25:01.854828 kubelet[3182]: E1101 00:25:01.853385 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:25:01.855938 kubelet[3182]: E1101 00:25:01.855867 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:25:02.686160 containerd[1985]: time="2025-11-01T00:25:02.686116902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:25:02.970880 containerd[1985]: time="2025-11-01T00:25:02.970206526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:02.978477 containerd[1985]: time="2025-11-01T00:25:02.978343240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:25:02.978916 containerd[1985]: time="2025-11-01T00:25:02.978382338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:02.979097 kubelet[3182]: E1101 00:25:02.979041 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:02.979508 kubelet[3182]: E1101 00:25:02.979093 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:25:02.979508 kubelet[3182]: E1101 00:25:02.979192 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-mczl8_calico-apiserver(a4244289-0ea7-4d4f-a667-210bd4cdc63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:02.979508 kubelet[3182]: E1101 00:25:02.979244 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:25:05.106821 systemd[1]: run-containerd-runc-k8s.io-63401efe00da6e4c6224a7081c99241ada4ba6b0a2ab13a06a94bbc2f7aff01b-runc.JK9I61.mount: Deactivated successfully. Nov 1 00:25:07.683141 containerd[1985]: time="2025-11-01T00:25:07.682750980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:25:07.940661 containerd[1985]: time="2025-11-01T00:25:07.940528327Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:07.943061 containerd[1985]: time="2025-11-01T00:25:07.942897595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:25:07.943061 containerd[1985]: time="2025-11-01T00:25:07.943002118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:25:07.943260 kubelet[3182]: E1101 00:25:07.943215 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:25:07.943655 kubelet[3182]: E1101 00:25:07.943269 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:25:07.943655 kubelet[3182]: E1101 00:25:07.943357 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:25:07.944635 containerd[1985]: time="2025-11-01T00:25:07.944602669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:25:08.169527 containerd[1985]: time="2025-11-01T00:25:08.169483917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:08.171789 containerd[1985]: time="2025-11-01T00:25:08.171711162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:25:08.172072 containerd[1985]: time="2025-11-01T00:25:08.171858018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:25:08.172120 kubelet[3182]: E1101 00:25:08.172042 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:25:08.172120 kubelet[3182]: E1101 00:25:08.172095 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:25:08.172200 kubelet[3182]: E1101 00:25:08.172171 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6659dc5f84-8hw6r_calico-system(7f37928f-30fa-48de-9724-092e451da4bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:08.172254 kubelet[3182]: E1101 00:25:08.172212 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:25:08.682358 containerd[1985]: time="2025-11-01T00:25:08.682298349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:25:08.919646 containerd[1985]: time="2025-11-01T00:25:08.919590887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:25:08.921739 containerd[1985]: time="2025-11-01T00:25:08.921666768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:25:08.921941 containerd[1985]: time="2025-11-01T00:25:08.921788198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:08.922976 containerd[1985]: time="2025-11-01T00:25:08.922535241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:25:08.923043 kubelet[3182]: E1101 00:25:08.922098 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:08.923043 kubelet[3182]: E1101 00:25:08.922155 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:25:08.923043 kubelet[3182]: E1101 00:25:08.922343 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67d9f69bfb-kcfrc_calico-apiserver(29fc9071-7019-4315-907a-15289e1e3c38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:08.923043 kubelet[3182]: E1101 00:25:08.922388 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:25:09.172605 containerd[1985]: time="2025-11-01T00:25:09.172462818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:09.188039 containerd[1985]: time="2025-11-01T00:25:09.187951451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:25:09.188345 containerd[1985]: time="2025-11-01T00:25:09.188061148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 1 00:25:09.188447 kubelet[3182]: E1101 00:25:09.188391 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:25:09.188862 kubelet[3182]: E1101 00:25:09.188437 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:25:09.188862 kubelet[3182]: E1101 00:25:09.188570 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-85c56f6579-hjmzt_calico-system(3b1a064e-eaea-4078-a670-51fea2063bf7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:09.188862 kubelet[3182]: E1101 00:25:09.188606 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:25:13.681118 kubelet[3182]: E1101 00:25:13.681020 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:25:13.842036 systemd[1]: cri-containerd-6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb.scope: Deactivated successfully. Nov 1 00:25:13.842367 systemd[1]: cri-containerd-6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb.scope: Consumed 12.372s CPU time. Nov 1 00:25:13.894010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb-rootfs.mount: Deactivated successfully.
Nov 1 00:25:13.936774 containerd[1985]: time="2025-11-01T00:25:13.926465886Z" level=info msg="shim disconnected" id=6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb namespace=k8s.io Nov 1 00:25:13.962720 containerd[1985]: time="2025-11-01T00:25:13.962481070Z" level=warning msg="cleaning up after shim disconnected" id=6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb namespace=k8s.io Nov 1 00:25:13.962720 containerd[1985]: time="2025-11-01T00:25:13.962528041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:14.442353 systemd[1]: cri-containerd-17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616.scope: Deactivated successfully. Nov 1 00:25:14.444784 systemd[1]: cri-containerd-17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616.scope: Consumed 4.927s CPU time, 28.8M memory peak, 0B memory swap peak. Nov 1 00:25:14.471883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616-rootfs.mount: Deactivated successfully. Nov 1 00:25:14.484658 containerd[1985]: time="2025-11-01T00:25:14.484448952Z" level=info msg="shim disconnected" id=17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616 namespace=k8s.io Nov 1 00:25:14.484658 containerd[1985]: time="2025-11-01T00:25:14.484506083Z" level=warning msg="cleaning up after shim disconnected" id=17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616 namespace=k8s.io Nov 1 00:25:14.484658 containerd[1985]: time="2025-11-01T00:25:14.484518248Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:14.569153 kubelet[3182]: I1101 00:25:14.569098 3182 scope.go:117] "RemoveContainer" containerID="17cda95fd9c02e74b96609b8c689db604fa5eb59ce99b1bfc0e297cca10cc616" Nov 1 00:25:14.569513 kubelet[3182]: I1101 00:25:14.569469 3182 scope.go:117] "RemoveContainer" containerID="6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb" Nov 1 00:25:14.589793 containerd[1985]: time="2025-11-01T00:25:14.589711429Z" level=info msg="CreateContainer within sandbox \"05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 1 00:25:14.589970 containerd[1985]: time="2025-11-01T00:25:14.589919824Z" level=info msg="CreateContainer within sandbox \"b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 1 00:25:14.644679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602132571.mount: Deactivated successfully. 
Nov 1 00:25:14.668252 containerd[1985]: time="2025-11-01T00:25:14.668198634Z" level=info msg="CreateContainer within sandbox \"b353223fe2f774742102acd0e65f5e581acd1bd88ba3bf9b36dbb52c41acb910\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056\"" Nov 1 00:25:14.668791 containerd[1985]: time="2025-11-01T00:25:14.668759905Z" level=info msg="StartContainer for \"12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056\"" Nov 1 00:25:14.671088 containerd[1985]: time="2025-11-01T00:25:14.671059189Z" level=info msg="CreateContainer within sandbox \"05e2b960c843cd6dddea84084b9935f25b5627674dd18f5b204f64d6c6ae6c69\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"af599e7dec21d1cdfbe79d87e1f502df37d73e4d1e8597711352a78fbac75d6f\"" Nov 1 00:25:14.671781 containerd[1985]: time="2025-11-01T00:25:14.671761137Z" level=info msg="StartContainer for \"af599e7dec21d1cdfbe79d87e1f502df37d73e4d1e8597711352a78fbac75d6f\"" Nov 1 00:25:14.722952 systemd[1]: Started cri-containerd-12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056.scope - libcontainer container 12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056. Nov 1 00:25:14.733956 systemd[1]: Started cri-containerd-af599e7dec21d1cdfbe79d87e1f502df37d73e4d1e8597711352a78fbac75d6f.scope - libcontainer container af599e7dec21d1cdfbe79d87e1f502df37d73e4d1e8597711352a78fbac75d6f. Nov 1 00:25:14.788139 containerd[1985]: time="2025-11-01T00:25:14.788098338Z" level=info msg="StartContainer for \"12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056\" returns successfully" Nov 1 00:25:14.812039 containerd[1985]: time="2025-11-01T00:25:14.811907828Z" level=info msg="StartContainer for \"af599e7dec21d1cdfbe79d87e1f502df37d73e4d1e8597711352a78fbac75d6f\" returns successfully" Nov 1 00:25:16.683537 containerd[1985]: time="2025-11-01T00:25:16.683501803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:25:16.941196 containerd[1985]: time="2025-11-01T00:25:16.939541729Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:16.941718 containerd[1985]: time="2025-11-01T00:25:16.941667183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:25:16.941829 containerd[1985]: time="2025-11-01T00:25:16.941693900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:25:16.942312 kubelet[3182]: E1101 00:25:16.942024 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:25:16.942312 kubelet[3182]: E1101 00:25:16.942079 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:25:16.943393 kubelet[3182]: E1101 00:25:16.942818 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:16.943483 containerd[1985]: time="2025-11-01T00:25:16.942506761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:25:17.205601 containerd[1985]: time="2025-11-01T00:25:17.205391812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:17.207684 containerd[1985]: time="2025-11-01T00:25:17.207611561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:25:17.207817 containerd[1985]: time="2025-11-01T00:25:17.207701164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:17.207953 kubelet[3182]: E1101 00:25:17.207902 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:25:17.207953 kubelet[3182]: E1101 00:25:17.207944 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:25:17.208185 kubelet[3182]: E1101 00:25:17.208115 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qq2mr_calico-system(3d0071e7-dbca-4b76-a432-c8b1bb561ab0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:17.208185 kubelet[3182]: E1101 00:25:17.208163 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:25:17.209028 containerd[1985]: time="2025-11-01T00:25:17.208848322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:25:17.441127 containerd[1985]: time="2025-11-01T00:25:17.441080858Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:25:17.443410 containerd[1985]: time="2025-11-01T00:25:17.443334824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:25:17.443410 containerd[1985]: time="2025-11-01T00:25:17.443342076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:25:17.443773 kubelet[3182]: E1101 00:25:17.443650 3182 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:25:17.443773 kubelet[3182]: E1101 00:25:17.443700 3182 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:25:17.443897 kubelet[3182]: E1101 00:25:17.443819 3182 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cfdt_calico-system(9d66f695-3c82-4cb4-ac8a-5f7c10006e53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:17.443993 kubelet[3182]: E1101 00:25:17.443877 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53" Nov 1 00:25:19.435550 systemd[1]: cri-containerd-d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754.scope: Deactivated successfully. Nov 1 00:25:19.436201 systemd[1]: cri-containerd-d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754.scope: Consumed 3.396s CPU time, 20.0M memory peak, 0B memory swap peak. Nov 1 00:25:19.466232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754-rootfs.mount: Deactivated successfully.
Nov 1 00:25:19.497678 containerd[1985]: time="2025-11-01T00:25:19.497608956Z" level=info msg="shim disconnected" id=d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754 namespace=k8s.io Nov 1 00:25:19.497678 containerd[1985]: time="2025-11-01T00:25:19.497663209Z" level=warning msg="cleaning up after shim disconnected" id=d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754 namespace=k8s.io Nov 1 00:25:19.497678 containerd[1985]: time="2025-11-01T00:25:19.497671967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:19.588416 kubelet[3182]: I1101 00:25:19.588207 3182 scope.go:117] "RemoveContainer" containerID="d8eeb7e34c2d5c3d3bd42b8382c5ebbe0d15f3551992ec00ee5e90f420823754" Nov 1 00:25:19.591949 containerd[1985]: time="2025-11-01T00:25:19.591878197Z" level=info msg="CreateContainer within sandbox \"69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 1 00:25:19.614823 containerd[1985]: time="2025-11-01T00:25:19.614771079Z" level=info msg="CreateContainer within sandbox \"69bb8226b5a6a6e72553ef498390816edb6842845785d2267c361582bc8a905b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e1c7411524e9d330a16a88125fc45e9e1ab13e06c39116c6fd73003a3c912561\"" Nov 1 00:25:19.615438 containerd[1985]: time="2025-11-01T00:25:19.615404037Z" level=info msg="StartContainer for \"e1c7411524e9d330a16a88125fc45e9e1ab13e06c39116c6fd73003a3c912561\"" Nov 1 00:25:19.653935 systemd[1]: Started cri-containerd-e1c7411524e9d330a16a88125fc45e9e1ab13e06c39116c6fd73003a3c912561.scope - libcontainer container e1c7411524e9d330a16a88125fc45e9e1ab13e06c39116c6fd73003a3c912561. Nov 1 00:25:19.714981 containerd[1985]: time="2025-11-01T00:25:19.714861644Z" level=info msg="StartContainer for \"e1c7411524e9d330a16a88125fc45e9e1ab13e06c39116c6fd73003a3c912561\" returns successfully" Nov 1 00:25:20.683605 kubelet[3182]: E1101 00:25:20.683556 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38"
Nov 1 00:25:20.685383 kubelet[3182]: E1101 00:25:20.685061 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:25:20.686204 kubelet[3182]: E1101 00:25:20.685690 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7" Nov 1 00:25:23.124550 kubelet[3182]: E1101 00:25:23.124230 3182 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-30-202)" Nov 1 00:25:24.682089 kubelet[3182]: E1101 00:25:24.682011 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-mczl8" podUID="a4244289-0ea7-4d4f-a667-210bd4cdc63c" Nov 1 00:25:26.508310 systemd[1]: cri-containerd-12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056.scope: Deactivated successfully. Nov 1 00:25:26.536060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056-rootfs.mount: Deactivated successfully.
Nov 1 00:25:26.558943 containerd[1985]: time="2025-11-01T00:25:26.558770392Z" level=info msg="shim disconnected" id=12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056 namespace=k8s.io Nov 1 00:25:26.558943 containerd[1985]: time="2025-11-01T00:25:26.558833313Z" level=warning msg="cleaning up after shim disconnected" id=12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056 namespace=k8s.io Nov 1 00:25:26.558943 containerd[1985]: time="2025-11-01T00:25:26.558842427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:26.617036 kubelet[3182]: I1101 00:25:26.616990 3182 scope.go:117] "RemoveContainer" containerID="6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb" Nov 1 00:25:26.617944 kubelet[3182]: I1101 00:25:26.617159 3182 scope.go:117] "RemoveContainer" containerID="12417fe42bcf4b963f717b44abd543b72f2f6ea78d3e210a8d5c0882596dd056" Nov 1 00:25:26.617944 kubelet[3182]: E1101 00:25:26.617396 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-2dsbk_tigera-operator(50f6d7ee-2b17-492d-a5e7-e634afeaf3d0)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-2dsbk" podUID="50f6d7ee-2b17-492d-a5e7-e634afeaf3d0" Nov 1 00:25:26.646303 containerd[1985]: time="2025-11-01T00:25:26.646215377Z" level=info msg="RemoveContainer for \"6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb\"" Nov 1 00:25:26.652062 containerd[1985]: time="2025-11-01T00:25:26.652019625Z" level=info msg="RemoveContainer for \"6a9aa37636714bcc433a60b05eeadaac0bc47986d42fdf5486af8f7aaf0716eb\" returns successfully" Nov 1 00:25:29.681379 kubelet[3182]: E1101 00:25:29.681328 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qq2mr" podUID="3d0071e7-dbca-4b76-a432-c8b1bb561ab0" Nov 1 00:25:30.683182 kubelet[3182]: E1101 00:25:30.683068 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cfdt" podUID="9d66f695-3c82-4cb4-ac8a-5f7c10006e53"
Nov 1 00:25:32.681401 kubelet[3182]: E1101 00:25:32.681250 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67d9f69bfb-kcfrc" podUID="29fc9071-7019-4315-907a-15289e1e3c38" Nov 1 00:25:33.131951 kubelet[3182]: E1101 00:25:33.131804 3182 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-202?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 1 00:25:33.682541 kubelet[3182]: E1101 00:25:33.682485 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6659dc5f84-8hw6r" podUID="7f37928f-30fa-48de-9724-092e451da4bf" Nov 1 00:25:34.681818 kubelet[3182]: E1101 00:25:34.681690 3182 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c56f6579-hjmzt" podUID="3b1a064e-eaea-4078-a670-51fea2063bf7"