Nov 5 16:01:25.874580 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025 Nov 5 16:01:25.874622 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 16:01:25.874641 kernel: BIOS-provided physical RAM map: Nov 5 16:01:25.874654 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 5 16:01:25.874666 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 5 16:01:25.874679 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 5 16:01:25.874696 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 5 16:01:25.874709 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 5 16:01:25.874723 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 5 16:01:25.874736 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 5 16:01:25.874753 kernel: NX (Execute Disable) protection: active Nov 5 16:01:25.874766 kernel: APIC: Static calls initialized Nov 5 16:01:25.874779 kernel: e820: update [mem 0x768bf018-0x768c7e57] usable ==> usable Nov 5 16:01:25.874793 kernel: extended physical RAM map: Nov 5 16:01:25.874811 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 5 16:01:25.874829 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768bf017] usable Nov 5 16:01:25.874844 kernel: reserve setup_data: [mem 0x00000000768bf018-0x00000000768c7e57] usable Nov 5 16:01:25.874859 kernel: reserve setup_data: [mem 0x00000000768c7e58-0x00000000786cdfff] usable Nov 5 16:01:25.874874 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 5 16:01:25.874888 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 5 16:01:25.874904 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 5 16:01:25.874919 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 5 16:01:25.874934 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 5 16:01:25.874948 kernel: efi: EFI v2.7 by EDK II Nov 5 16:01:25.874966 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Nov 5 16:01:25.874980 kernel: secureboot: Secure boot disabled Nov 5 16:01:25.874995 kernel: SMBIOS 2.7 present. 
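The command line above is Flatcar's usual dm-verity layout for a read-only /usr: mount.usr points the initrd at /dev/mapper/usr, verity.usr selects the hash partition by PARTUUID, and verity.usrhash carries the expected Merkle root, so reads from /usr fail rather than return tampered data if verification ever disagrees with that hash. A minimal Python sketch of how such key=value options can be pulled out of /proc/cmdline (illustrative only, not the parser the Flatcar initrd actually uses):

    # Illustrative parser for kernel command-line options like the ones above.
    # Repeated keys (e.g. console=) are collected into lists.
    def parse_cmdline(cmdline: str) -> dict:
        opts = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            opts.setdefault(key, []).append(value if sep else True)
        return opts

    with open("/proc/cmdline") as f:
        opts = parse_cmdline(f.read())

    print(opts.get("verity.usrhash"))   # expected root hash for the /usr image
    print(opts.get("verity.usr"))       # e.g. ['PARTUUID=7130c94a-...']
    print(opts.get("console"))          # ['ttyS0,115200n8', 'tty0']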
Nov 5 16:01:25.875010 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 5 16:01:25.875025 kernel: DMI: Memory slots populated: 1/1 Nov 5 16:01:25.875039 kernel: Hypervisor detected: KVM Nov 5 16:01:25.875054 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 5 16:01:25.875069 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 5 16:01:25.875085 kernel: kvm-clock: using sched offset of 6345562865 cycles Nov 5 16:01:25.875101 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 5 16:01:25.875117 kernel: tsc: Detected 2499.996 MHz processor Nov 5 16:01:25.875135 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 5 16:01:25.875150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 5 16:01:25.875165 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 5 16:01:25.875181 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 5 16:01:25.875196 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 5 16:01:25.875217 kernel: Using GB pages for direct mapping Nov 5 16:01:25.875235 kernel: ACPI: Early table checksum verification disabled Nov 5 16:01:25.875252 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 5 16:01:25.875268 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 5 16:01:25.875285 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 5 16:01:25.875301 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 5 16:01:25.875320 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 5 16:01:25.875336 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 5 16:01:25.875352 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 5 16:01:25.875369 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 5 16:01:25.875385 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 5 16:01:25.875401 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 5 16:01:25.875417 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 5 16:01:25.875452 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 5 16:01:25.875469 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 5 16:01:25.875486 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 5 16:01:25.875502 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 5 16:01:25.875519 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 5 16:01:25.875535 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 5 16:01:25.875552 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 5 16:01:25.875571 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Nov 5 16:01:25.875588 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 5 16:01:25.875604 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 5 16:01:25.875621 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 5 16:01:25.875637 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Nov 5 16:01:25.875653 
kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Nov 5 16:01:25.875669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 5 16:01:25.875685 kernel: NUMA: Initialized distance table, cnt=1 Nov 5 16:01:25.875704 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Nov 5 16:01:25.875721 kernel: Zone ranges: Nov 5 16:01:25.875737 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 5 16:01:25.875752 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 5 16:01:25.875768 kernel: Normal empty Nov 5 16:01:25.875784 kernel: Device empty Nov 5 16:01:25.875800 kernel: Movable zone start for each node Nov 5 16:01:25.875819 kernel: Early memory node ranges Nov 5 16:01:25.875835 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 5 16:01:25.875851 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 5 16:01:25.875868 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 5 16:01:25.875884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 5 16:01:25.875900 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 5 16:01:25.875917 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 5 16:01:25.875933 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 5 16:01:25.875953 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 5 16:01:25.875970 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 5 16:01:25.875986 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 5 16:01:25.876003 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 5 16:01:25.876019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 5 16:01:25.876035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 5 16:01:25.876051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 5 16:01:25.876070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 5 16:01:25.876087 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 5 16:01:25.876104 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 5 16:01:25.876121 kernel: TSC deadline timer available Nov 5 16:01:25.876137 kernel: CPU topo: Max. logical packages: 1 Nov 5 16:01:25.876153 kernel: CPU topo: Max. logical dies: 1 Nov 5 16:01:25.876170 kernel: CPU topo: Max. dies per package: 1 Nov 5 16:01:25.876185 kernel: CPU topo: Max. threads per core: 2 Nov 5 16:01:25.876204 kernel: CPU topo: Num. cores per package: 1 Nov 5 16:01:25.876220 kernel: CPU topo: Num. 
threads per package: 2 Nov 5 16:01:25.876237 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 5 16:01:25.876265 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 5 16:01:25.876282 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 5 16:01:25.876298 kernel: Booting paravirtualized kernel on KVM Nov 5 16:01:25.876315 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 5 16:01:25.876334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 5 16:01:25.876352 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 5 16:01:25.876369 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 5 16:01:25.876385 kernel: pcpu-alloc: [0] 0 1 Nov 5 16:01:25.876401 kernel: kvm-guest: PV spinlocks enabled Nov 5 16:01:25.876418 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 5 16:01:25.876448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 16:01:25.876478 kernel: random: crng init done Nov 5 16:01:25.876494 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 16:01:25.876510 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 5 16:01:25.876524 kernel: Fallback order for Node 0: 0 Nov 5 16:01:25.876539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Nov 5 16:01:25.876554 kernel: Policy zone: DMA32 Nov 5 16:01:25.876581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 16:01:25.876599 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 5 16:01:25.876616 kernel: Kernel/User page tables isolation: enabled Nov 5 16:01:25.876635 kernel: ftrace: allocating 40092 entries in 157 pages Nov 5 16:01:25.876652 kernel: ftrace: allocated 157 pages with 5 groups Nov 5 16:01:25.876669 kernel: Dynamic Preempt: voluntary Nov 5 16:01:25.876686 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 16:01:25.876705 kernel: rcu: RCU event tracing is enabled. Nov 5 16:01:25.876722 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 5 16:01:25.876739 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 16:01:25.876761 kernel: Rude variant of Tasks RCU enabled. Nov 5 16:01:25.876778 kernel: Tracing variant of Tasks RCU enabled. Nov 5 16:01:25.876796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 16:01:25.876812 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 5 16:01:25.876833 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 16:01:25.876851 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 16:01:25.876868 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 16:01:25.876886 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 5 16:01:25.876903 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 5 16:01:25.876920 kernel: Console: colour dummy device 80x25 Nov 5 16:01:25.876938 kernel: printk: legacy console [tty0] enabled Nov 5 16:01:25.876959 kernel: printk: legacy console [ttyS0] enabled Nov 5 16:01:25.876975 kernel: ACPI: Core revision 20240827 Nov 5 16:01:25.876994 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 5 16:01:25.877012 kernel: APIC: Switch to symmetric I/O mode setup Nov 5 16:01:25.877029 kernel: x2apic enabled Nov 5 16:01:25.877047 kernel: APIC: Switched APIC routing to: physical x2apic Nov 5 16:01:25.877066 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Nov 5 16:01:25.877085 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Nov 5 16:01:25.877107 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 5 16:01:25.877125 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 5 16:01:25.877142 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 5 16:01:25.877159 kernel: Spectre V2 : Mitigation: Retpolines Nov 5 16:01:25.877177 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 5 16:01:25.877194 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Nov 5 16:01:25.877211 kernel: RETBleed: Vulnerable Nov 5 16:01:25.877227 kernel: Speculative Store Bypass: Vulnerable Nov 5 16:01:25.877248 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 5 16:01:25.877265 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 5 16:01:25.877282 kernel: GDS: Unknown: Dependent on hypervisor status Nov 5 16:01:25.877298 kernel: active return thunk: its_return_thunk Nov 5 16:01:25.877314 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 5 16:01:25.877332 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 5 16:01:25.877348 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 5 16:01:25.877365 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 5 16:01:25.877383 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 5 16:01:25.877400 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 5 16:01:25.877420 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 5 16:01:25.877455 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 5 16:01:25.877472 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 5 16:01:25.877489 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 5 16:01:25.877506 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 5 16:01:25.877522 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 5 16:01:25.877540 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 5 16:01:25.877556 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 5 16:01:25.877573 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 5 16:01:25.877590 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 5 16:01:25.877606 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 5 16:01:25.877627 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
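The run of Spectre/RETBleed/MDS lines above is the kernel's mitigation inventory for this EC2 guest; the same status strings can be read back after boot from sysfs, which is handy when comparing instance types. A small sketch (Linux-only, standard sysfs path):

    # Print the kernel's current view of each CPU vulnerability, matching
    # the RETBleed/MDS/MMIO Stale Data lines in the boot log above.
    import pathlib

    for entry in sorted(pathlib.Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")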
Nov 5 16:01:25.877643 kernel: Freeing SMP alternatives memory: 32K Nov 5 16:01:25.877659 kernel: pid_max: default: 32768 minimum: 301 Nov 5 16:01:25.877676 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 16:01:25.877693 kernel: landlock: Up and running. Nov 5 16:01:25.877708 kernel: SELinux: Initializing. Nov 5 16:01:25.877723 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 5 16:01:25.877738 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 5 16:01:25.877754 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 5 16:01:25.877770 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 5 16:01:25.877785 kernel: signal: max sigframe size: 3632 Nov 5 16:01:25.877806 kernel: rcu: Hierarchical SRCU implementation. Nov 5 16:01:25.877823 kernel: rcu: Max phase no-delay instances is 400. Nov 5 16:01:25.877839 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 16:01:25.877855 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 5 16:01:25.877872 kernel: smp: Bringing up secondary CPUs ... Nov 5 16:01:25.877888 kernel: smpboot: x86: Booting SMP configuration: Nov 5 16:01:25.877904 kernel: .... node #0, CPUs: #1 Nov 5 16:01:25.877925 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 5 16:01:25.877942 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 5 16:01:25.877958 kernel: smp: Brought up 1 node, 2 CPUs Nov 5 16:01:25.877975 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Nov 5 16:01:25.877993 kernel: Memory: 1930580K/2037804K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102660K reserved, 0K cma-reserved) Nov 5 16:01:25.878010 kernel: devtmpfs: initialized Nov 5 16:01:25.878032 kernel: x86/mm: Memory block size: 128MB Nov 5 16:01:25.878050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 5 16:01:25.878068 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 16:01:25.878087 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 5 16:01:25.878105 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 16:01:25.878123 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 16:01:25.878142 kernel: audit: initializing netlink subsys (disabled) Nov 5 16:01:25.878164 kernel: audit: type=2000 audit(1762358483.543:1): state=initialized audit_enabled=0 res=1 Nov 5 16:01:25.878182 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 16:01:25.878200 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 5 16:01:25.878218 kernel: cpuidle: using governor menu Nov 5 16:01:25.878237 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 16:01:25.878255 kernel: dca service started, version 1.12.1 Nov 5 16:01:25.878274 kernel: PCI: Using configuration type 1 for base access Nov 5 16:01:25.878295 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
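Audit timestamps such as audit(1762358483.543:1) above are plain Unix epoch seconds; they line up with the rtc_cmos message further down, which maps 1762358482 to 2025-11-05T16:01:22 UTC. A quick conversion:

    # Convert the audit record's epoch timestamp to UTC wall-clock time.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1762358483.543, tz=timezone.utc).isoformat())
    # 2025-11-05T16:01:23.543000+00:00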
Nov 5 16:01:25.878313 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 16:01:25.878331 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 16:01:25.878349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 16:01:25.878367 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 16:01:25.878386 kernel: ACPI: Added _OSI(Module Device) Nov 5 16:01:25.878403 kernel: ACPI: Added _OSI(Processor Device) Nov 5 16:01:25.878419 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 16:01:25.878457 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 5 16:01:25.878475 kernel: ACPI: Interpreter enabled Nov 5 16:01:25.878490 kernel: ACPI: PM: (supports S0 S5) Nov 5 16:01:25.878508 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 16:01:25.878526 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 16:01:25.878545 kernel: PCI: Using E820 reservations for host bridge windows Nov 5 16:01:25.878563 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 5 16:01:25.878584 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 16:01:25.878894 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 5 16:01:25.879094 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 5 16:01:25.879284 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 5 16:01:25.879306 kernel: acpiphp: Slot [3] registered Nov 5 16:01:25.879327 kernel: acpiphp: Slot [4] registered Nov 5 16:01:25.879346 kernel: acpiphp: Slot [5] registered Nov 5 16:01:25.879363 kernel: acpiphp: Slot [6] registered Nov 5 16:01:25.879381 kernel: acpiphp: Slot [7] registered Nov 5 16:01:25.879398 kernel: acpiphp: Slot [8] registered Nov 5 16:01:25.879416 kernel: acpiphp: Slot [9] registered Nov 5 16:01:25.879433 kernel: acpiphp: Slot [10] registered Nov 5 16:01:25.879474 kernel: acpiphp: Slot [11] registered Nov 5 16:01:25.879494 kernel: acpiphp: Slot [12] registered Nov 5 16:01:25.879511 kernel: acpiphp: Slot [13] registered Nov 5 16:01:25.879529 kernel: acpiphp: Slot [14] registered Nov 5 16:01:25.879547 kernel: acpiphp: Slot [15] registered Nov 5 16:01:25.879565 kernel: acpiphp: Slot [16] registered Nov 5 16:01:25.879583 kernel: acpiphp: Slot [17] registered Nov 5 16:01:25.879601 kernel: acpiphp: Slot [18] registered Nov 5 16:01:25.879621 kernel: acpiphp: Slot [19] registered Nov 5 16:01:25.879639 kernel: acpiphp: Slot [20] registered Nov 5 16:01:25.879656 kernel: acpiphp: Slot [21] registered Nov 5 16:01:25.879674 kernel: acpiphp: Slot [22] registered Nov 5 16:01:25.879691 kernel: acpiphp: Slot [23] registered Nov 5 16:01:25.879709 kernel: acpiphp: Slot [24] registered Nov 5 16:01:25.879727 kernel: acpiphp: Slot [25] registered Nov 5 16:01:25.879745 kernel: acpiphp: Slot [26] registered Nov 5 16:01:25.879765 kernel: acpiphp: Slot [27] registered Nov 5 16:01:25.879783 kernel: acpiphp: Slot [28] registered Nov 5 16:01:25.879801 kernel: acpiphp: Slot [29] registered Nov 5 16:01:25.879819 kernel: acpiphp: Slot [30] registered Nov 5 16:01:25.879836 kernel: acpiphp: Slot [31] registered Nov 5 16:01:25.879854 kernel: PCI host bridge to bus 0000:00 Nov 5 16:01:25.880055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 5 16:01:25.880234 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 5 
16:01:25.880419 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 5 16:01:25.880609 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 5 16:01:25.880795 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 5 16:01:25.880956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 16:01:25.881164 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 5 16:01:25.881358 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 5 16:01:25.881598 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Nov 5 16:01:25.881791 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 5 16:01:25.881974 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 5 16:01:25.882163 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 5 16:01:25.882356 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 5 16:01:25.882565 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 5 16:01:25.882756 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 5 16:01:25.882944 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 5 16:01:25.883141 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Nov 5 16:01:25.883336 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Nov 5 16:01:25.883548 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 5 16:01:25.883740 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 5 16:01:25.883942 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Nov 5 16:01:25.884134 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Nov 5 16:01:25.884338 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Nov 5 16:01:25.884559 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Nov 5 16:01:25.884581 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 5 16:01:25.884597 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 5 16:01:25.884613 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 16:01:25.884629 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 5 16:01:25.884645 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 5 16:01:25.884665 kernel: iommu: Default domain type: Translated Nov 5 16:01:25.884681 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 16:01:25.884697 kernel: efivars: Registered efivars operations Nov 5 16:01:25.884714 kernel: PCI: Using ACPI for IRQ routing Nov 5 16:01:25.884730 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 16:01:25.884746 kernel: e820: reserve RAM buffer [mem 0x768bf018-0x77ffffff] Nov 5 16:01:25.884761 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 5 16:01:25.884776 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 5 16:01:25.884973 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 5 16:01:25.885183 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 5 16:01:25.885372 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 16:01:25.885395 kernel: vgaarb: loaded Nov 5 16:01:25.885413 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 5 16:01:25.885431 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Nov 5 
16:01:25.885476 kernel: clocksource: Switched to clocksource kvm-clock Nov 5 16:01:25.885497 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 16:01:25.885516 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 16:01:25.885533 kernel: pnp: PnP ACPI init Nov 5 16:01:25.885550 kernel: pnp: PnP ACPI: found 5 devices Nov 5 16:01:25.885570 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 16:01:25.885588 kernel: NET: Registered PF_INET protocol family Nov 5 16:01:25.885606 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 16:01:25.885627 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 5 16:01:25.885645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 16:01:25.885663 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 16:01:25.885681 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 5 16:01:25.885698 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 5 16:01:25.885716 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 16:01:25.885734 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 16:01:25.885754 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 16:01:25.885772 kernel: NET: Registered PF_XDP protocol family Nov 5 16:01:25.885956 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 5 16:01:25.886177 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 5 16:01:25.886381 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 5 16:01:25.886579 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 5 16:01:25.886754 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 5 16:01:25.886960 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 5 16:01:25.886982 kernel: PCI: CLS 0 bytes, default 64 Nov 5 16:01:25.887000 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 5 16:01:25.887018 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Nov 5 16:01:25.887036 kernel: clocksource: Switched to clocksource tsc Nov 5 16:01:25.887053 kernel: Initialise system trusted keyrings Nov 5 16:01:25.887071 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 5 16:01:25.887092 kernel: Key type asymmetric registered Nov 5 16:01:25.887109 kernel: Asymmetric key parser 'x509' registered Nov 5 16:01:25.887126 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 16:01:25.887145 kernel: io scheduler mq-deadline registered Nov 5 16:01:25.887162 kernel: io scheduler kyber registered Nov 5 16:01:25.887179 kernel: io scheduler bfq registered Nov 5 16:01:25.887197 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 16:01:25.887217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 16:01:25.887234 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 16:01:25.887251 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 5 16:01:25.887269 kernel: i8042: Warning: Keylock active Nov 5 16:01:25.887286 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 5 16:01:25.887304 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 16:01:25.887535 kernel: rtc_cmos 00:00: RTC 
can wake from S4 Nov 5 16:01:25.887726 kernel: rtc_cmos 00:00: registered as rtc0 Nov 5 16:01:25.887907 kernel: rtc_cmos 00:00: setting system clock to 2025-11-05T16:01:22 UTC (1762358482) Nov 5 16:01:25.888087 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 5 16:01:25.888133 kernel: intel_pstate: CPU model not supported Nov 5 16:01:25.888153 kernel: efifb: probing for efifb Nov 5 16:01:25.888172 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Nov 5 16:01:25.888194 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 5 16:01:25.888212 kernel: efifb: scrolling: redraw Nov 5 16:01:25.888231 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 5 16:01:25.888262 kernel: Console: switching to colour frame buffer device 100x37 Nov 5 16:01:25.888280 kernel: fb0: EFI VGA frame buffer device Nov 5 16:01:25.888299 kernel: pstore: Using crash dump compression: deflate Nov 5 16:01:25.888317 kernel: pstore: Registered efi_pstore as persistent store backend Nov 5 16:01:25.888338 kernel: NET: Registered PF_INET6 protocol family Nov 5 16:01:25.888357 kernel: Segment Routing with IPv6 Nov 5 16:01:25.888375 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 16:01:25.888393 kernel: NET: Registered PF_PACKET protocol family Nov 5 16:01:25.888411 kernel: Key type dns_resolver registered Nov 5 16:01:25.888429 kernel: IPI shorthand broadcast: enabled Nov 5 16:01:25.888461 kernel: sched_clock: Marking stable (826001709, 147055751)->(1047695557, -74638097) Nov 5 16:01:25.888483 kernel: registered taskstats version 1 Nov 5 16:01:25.888501 kernel: Loading compiled-in X.509 certificates Nov 5 16:01:25.888520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 16:01:25.888538 kernel: Demotion targets for Node 0: null Nov 5 16:01:25.888556 kernel: Key type .fscrypt registered Nov 5 16:01:25.888575 kernel: Key type fscrypt-provisioning registered Nov 5 16:01:25.888593 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 16:01:25.888611 kernel: ima: Allocated hash algorithm: sha1 Nov 5 16:01:25.888632 kernel: ima: No architecture policies found Nov 5 16:01:25.888650 kernel: clk: Disabling unused clocks Nov 5 16:01:25.888669 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 5 16:01:25.888687 kernel: Write protecting the kernel read-only data: 40960k Nov 5 16:01:25.888711 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 16:01:25.888729 kernel: Run /init as init process Nov 5 16:01:25.888747 kernel: with arguments: Nov 5 16:01:25.888765 kernel: /init Nov 5 16:01:25.888783 kernel: with environment: Nov 5 16:01:25.888800 kernel: HOME=/ Nov 5 16:01:25.888818 kernel: TERM=linux Nov 5 16:01:25.888985 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 5 16:01:25.889012 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 5 16:01:25.889147 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 5 16:01:25.889171 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 16:01:25.889189 kernel: GPT:25804799 != 33554431 Nov 5 16:01:25.889208 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 16:01:25.889229 kernel: GPT:25804799 != 33554431 Nov 5 16:01:25.889247 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 16:01:25.889265 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 5 16:01:25.889283 kernel: SCSI subsystem initialized Nov 5 16:01:25.889302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 16:01:25.889321 kernel: device-mapper: uevent: version 1.0.3 Nov 5 16:01:25.889339 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 16:01:25.889360 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 16:01:25.889379 kernel: raid6: avx512x4 gen() 18109 MB/s Nov 5 16:01:25.889397 kernel: raid6: avx512x2 gen() 18188 MB/s Nov 5 16:01:25.889415 kernel: raid6: avx512x1 gen() 18124 MB/s Nov 5 16:01:25.889433 kernel: raid6: avx2x4 gen() 17818 MB/s Nov 5 16:01:25.889473 kernel: raid6: avx2x2 gen() 18005 MB/s Nov 5 16:01:25.889491 kernel: raid6: avx2x1 gen() 13743 MB/s Nov 5 16:01:25.889513 kernel: raid6: using algorithm avx512x2 gen() 18188 MB/s Nov 5 16:01:25.889531 kernel: raid6: .... xor() 24105 MB/s, rmw enabled Nov 5 16:01:25.889549 kernel: raid6: using avx512x2 recovery algorithm Nov 5 16:01:25.889568 kernel: xor: automatically using best checksumming function avx Nov 5 16:01:25.889587 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 16:01:25.889605 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 16:01:25.889624 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (152) Nov 5 16:01:25.889646 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 16:01:25.889664 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:25.889683 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 16:01:25.889701 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 16:01:25.889719 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 16:01:25.889738 kernel: loop: module loaded Nov 5 16:01:25.889756 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 16:01:25.889777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 16:01:25.889798 systemd[1]: Successfully made /usr/ read-only. Nov 5 16:01:25.889822 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 16:01:25.889841 systemd[1]: Detected virtualization amazon. Nov 5 16:01:25.889860 systemd[1]: Detected architecture x86-64. Nov 5 16:01:25.889878 systemd[1]: Running in initrd. Nov 5 16:01:25.889900 systemd[1]: No hostname configured, using default hostname. Nov 5 16:01:25.889919 systemd[1]: Hostname set to . Nov 5 16:01:25.889938 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 16:01:25.889957 systemd[1]: Queued start job for default target initrd.target. Nov 5 16:01:25.889976 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 16:01:25.889995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:01:25.890016 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:01:25.890037 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
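The earlier GPT warnings on nvme0n1 ("Alternate GPT header not at the end of the disk", 25804799 != 33554431) are the expected result of writing a disk image to a larger EBS volume: the image's backup GPT header still sits in the last sector of the original image rather than of the 16 GiB volume. Rough arithmetic, assuming the usual 512-byte logical sectors:

    # Why the kernel prints "GPT:25804799 != 33554431": the backup GPT header
    # is expected in the volume's last sector, but the image was built for a
    # smaller disk. 512-byte logical sectors are assumed here.
    SECTOR = 512
    image_last_lba, volume_last_lba = 25804799, 33554431

    print((image_last_lba + 1) * SECTOR / 2**30)    # ~12.3 GiB image
    print((volume_last_lba + 1) * SECTOR / 2**30)   # 16.0 GiB volume

The disk-uuid.service step later in this log rewrites the table on first boot (note its partprobe/kpartx hint), so the kernel's suggestion to run GNU Parted does not normally require manual action here.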
Nov 5 16:01:25.890055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 16:01:25.890076 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 16:01:25.890096 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 16:01:25.890116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:01:25.890138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:01:25.890157 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 16:01:25.890176 systemd[1]: Reached target paths.target - Path Units. Nov 5 16:01:25.890195 systemd[1]: Reached target slices.target - Slice Units. Nov 5 16:01:25.890215 systemd[1]: Reached target swap.target - Swaps. Nov 5 16:01:25.890233 systemd[1]: Reached target timers.target - Timer Units. Nov 5 16:01:25.890252 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 16:01:25.890274 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 16:01:25.890293 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 16:01:25.890312 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 16:01:25.890331 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:01:25.890350 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 16:01:25.890369 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:01:25.890388 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 16:01:25.890410 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 16:01:25.890430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 16:01:25.890462 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 16:01:25.890481 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 16:01:25.890501 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 16:01:25.890520 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 16:01:25.890543 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 16:01:25.890562 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 16:01:25.890581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:25.890602 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 16:01:25.890625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:01:25.890645 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 16:01:25.890664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 16:01:25.890715 systemd-journald[286]: Collecting audit messages is disabled. Nov 5 16:01:25.890758 systemd-journald[286]: Journal started Nov 5 16:01:25.890795 systemd-journald[286]: Runtime Journal (/run/log/journal/ec26ef6f20e58c65d6dce153a7f76053) is 4.7M, max 38.1M, 33.3M free. Nov 5 16:01:25.895468 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 5 16:01:25.901644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 16:01:25.910168 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 16:01:25.920680 kernel: Bridge firewalling registered Nov 5 16:01:25.919883 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 16:01:25.920423 systemd-modules-load[289]: Inserted module 'br_netfilter' Nov 5 16:01:25.923363 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 16:01:25.929888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:01:25.933677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 16:01:25.947960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:01:25.962999 systemd-tmpfiles[301]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 16:01:25.971838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:01:25.980615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:01:25.984975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 16:01:25.990302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:25.993644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 16:01:26.026614 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 16:01:26.030993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 16:01:26.108149 dracut-cmdline[329]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 16:01:26.109086 systemd-resolved[315]: Positive Trust Anchors: Nov 5 16:01:26.109097 systemd-resolved[315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 16:01:26.109101 systemd-resolved[315]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 16:01:26.109139 systemd-resolved[315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 16:01:26.145305 systemd-resolved[315]: Defaulting to hostname 'linux'. Nov 5 16:01:26.146876 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
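The resolved "Positive Trust Anchors" entries above are the DNSSEC root DS records systemd-resolved ships as built-in trust anchors (key tags 20326 and 38696). Split into its RFC 4034 fields, one record reads:

    # Break one of resolved's root DS records into its DNSSEC fields.
    # Algorithm 8 is RSA/SHA-256; digest type 2 is SHA-256.
    record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = record.split()
    print(key_tag, algorithm, digest_type, digest[:16] + "...")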
Nov 5 16:01:26.147740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:01:26.304467 kernel: Loading iSCSI transport class v2.0-870. Nov 5 16:01:26.419478 kernel: iscsi: registered transport (tcp) Nov 5 16:01:26.443650 kernel: iscsi: registered transport (qla4xxx) Nov 5 16:01:26.443737 kernel: QLogic iSCSI HBA Driver Nov 5 16:01:26.473654 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 16:01:26.491234 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:01:26.492825 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 16:01:26.541023 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 16:01:26.543285 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 16:01:26.546612 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 16:01:26.586784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 16:01:26.590706 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:01:26.635555 systemd-udevd[571]: Using default interface naming scheme 'v257'. Nov 5 16:01:26.653976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:01:26.659750 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 16:01:26.674706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 16:01:26.679689 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 16:01:26.692397 dracut-pre-trigger[662]: rd.md=0: removing MD RAID activation Nov 5 16:01:26.735142 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 16:01:26.739666 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 16:01:26.746619 systemd-networkd[669]: lo: Link UP Nov 5 16:01:26.746630 systemd-networkd[669]: lo: Gained carrier Nov 5 16:01:26.747619 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 16:01:26.748513 systemd[1]: Reached target network.target - Network. Nov 5 16:01:26.818817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:01:26.823526 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 16:01:26.941811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:01:26.942124 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:26.943128 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:26.949785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:26.952806 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 5 16:01:26.953155 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 5 16:01:26.960765 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 5 16:01:26.966466 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:4d:bf:04:ab:21 Nov 5 16:01:26.968180 (udev-worker)[715]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 16:01:26.986175 systemd-networkd[669]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:01:26.986192 systemd-networkd[669]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 16:01:26.993288 systemd-networkd[669]: eth0: Link UP Nov 5 16:01:26.993647 systemd-networkd[669]: eth0: Gained carrier Nov 5 16:01:26.993668 systemd-networkd[669]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:01:27.003538 systemd-networkd[669]: eth0: DHCPv4 address 172.31.16.11/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 16:01:27.013300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:27.043462 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 16:01:27.089510 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 16:01:27.156482 kernel: AES CTR mode by8 optimization enabled Nov 5 16:01:27.156550 kernel: nvme nvme0: using unchecked data buffer Nov 5 16:01:27.257959 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 5 16:01:27.263662 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 16:01:27.286097 disk-uuid[834]: Primary Header is updated. Nov 5 16:01:27.286097 disk-uuid[834]: Secondary Entries is updated. Nov 5 16:01:27.286097 disk-uuid[834]: Secondary Header is updated. Nov 5 16:01:27.376259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 5 16:01:27.409510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 16:01:27.466635 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 5 16:01:27.565124 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 16:01:27.570415 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 16:01:27.571031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 16:01:27.572393 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 16:01:27.574912 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 16:01:27.602475 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 16:01:28.281633 systemd-networkd[669]: eth0: Gained IPv6LL Nov 5 16:01:28.428850 disk-uuid[835]: Warning: The kernel is still using the old partition table. Nov 5 16:01:28.428850 disk-uuid[835]: The new table will be used at the next reboot or after you Nov 5 16:01:28.428850 disk-uuid[835]: run partprobe(8) or kpartx(8) Nov 5 16:01:28.428850 disk-uuid[835]: The operation has completed successfully. Nov 5 16:01:28.438599 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 16:01:28.438743 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 16:01:28.440970 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
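The DHCPv4 lease logged above (172.31.16.11/20 from gateway 172.31.16.1) places eth0 in a /20 VPC subnet; Python's ipaddress module reproduces the subnet boundaries implied by that lease:

    # Derive the subnet implied by the lease "172.31.16.11/20" in the log.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.16.11/20")
    print(iface.network)                     # 172.31.16.0/20
    print(iface.network.broadcast_address)   # 172.31.31.255
    print(iface.network.num_addresses)       # 4096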
Nov 5 16:01:28.484496 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (995) Nov 5 16:01:28.488505 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:28.488579 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:28.535262 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 16:01:28.535351 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 16:01:28.544462 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:28.545152 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 16:01:28.547074 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 16:01:29.800532 ignition[1014]: Ignition 2.22.0 Nov 5 16:01:29.800548 ignition[1014]: Stage: fetch-offline Nov 5 16:01:29.801273 ignition[1014]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:29.801298 ignition[1014]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:29.801702 ignition[1014]: Ignition finished successfully Nov 5 16:01:29.803578 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 16:01:29.806066 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 5 16:01:29.838621 ignition[1020]: Ignition 2.22.0 Nov 5 16:01:29.838637 ignition[1020]: Stage: fetch Nov 5 16:01:29.839047 ignition[1020]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:29.839060 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:29.839185 ignition[1020]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:29.859202 ignition[1020]: PUT result: OK Nov 5 16:01:29.862133 ignition[1020]: parsed url from cmdline: "" Nov 5 16:01:29.862147 ignition[1020]: no config URL provided Nov 5 16:01:29.862157 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 16:01:29.862173 ignition[1020]: no config at "/usr/lib/ignition/user.ign" Nov 5 16:01:29.862497 ignition[1020]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:29.864222 ignition[1020]: PUT result: OK Nov 5 16:01:29.864405 ignition[1020]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 5 16:01:29.865325 ignition[1020]: GET result: OK Nov 5 16:01:29.865429 ignition[1020]: parsing config with SHA512: eaf7025ba0cd07dbf044cb6339887b9a407265d699da5d15cfc7d45ddc2ca310e18f965414806b03e60c0c7f5913ea15e118b00471932d3585aa6fc711210016 Nov 5 16:01:29.871757 unknown[1020]: fetched base config from "system" Nov 5 16:01:29.871781 unknown[1020]: fetched base config from "system" Nov 5 16:01:29.872733 ignition[1020]: fetch: fetch complete Nov 5 16:01:29.871790 unknown[1020]: fetched user config from "aws" Nov 5 16:01:29.872740 ignition[1020]: fetch: fetch passed Nov 5 16:01:29.876200 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 16:01:29.872808 ignition[1020]: Ignition finished successfully Nov 5 16:01:29.878857 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
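The fetch stage above is the standard EC2 IMDSv2 exchange: a PUT to /latest/api/token to obtain a short-lived session token, then a GET for the user data with that token attached; the SHA512 line records a digest of the config Ignition fetched. A minimal sketch of the same two requests (only meaningful on an EC2 instance; the 21600-second TTL is an arbitrary choice for this example):

    # The two IMDS requests Ignition logs above: PUT for a session token,
    # then GET the user data with the token header (IMDSv2).
    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req, timeout=5).read().decode())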
Nov 5 16:01:29.917006 ignition[1027]: Ignition 2.22.0 Nov 5 16:01:29.917023 ignition[1027]: Stage: kargs Nov 5 16:01:29.917424 ignition[1027]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:29.917454 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:29.917568 ignition[1027]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:29.919263 ignition[1027]: PUT result: OK Nov 5 16:01:29.921859 ignition[1027]: kargs: kargs passed Nov 5 16:01:29.921933 ignition[1027]: Ignition finished successfully Nov 5 16:01:29.924104 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 16:01:29.925850 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 16:01:29.954214 ignition[1034]: Ignition 2.22.0 Nov 5 16:01:29.954231 ignition[1034]: Stage: disks Nov 5 16:01:29.954658 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:29.954670 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:29.954790 ignition[1034]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:29.955791 ignition[1034]: PUT result: OK Nov 5 16:01:29.958308 ignition[1034]: disks: disks passed Nov 5 16:01:29.958387 ignition[1034]: Ignition finished successfully Nov 5 16:01:29.960515 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 16:01:29.961476 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 16:01:29.961822 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 16:01:29.962357 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 16:01:29.962944 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 16:01:29.963501 systemd[1]: Reached target basic.target - Basic System. Nov 5 16:01:29.965403 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 16:01:30.071258 systemd-fsck[1043]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 16:01:30.074354 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 16:01:30.077013 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 16:01:30.315481 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 16:01:30.315405 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 16:01:30.316574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 16:01:30.365296 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 16:01:30.368082 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 16:01:30.371028 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 16:01:30.371720 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 16:01:30.371765 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 16:01:30.380419 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 16:01:30.382921 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 5 16:01:30.395557 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1062) Nov 5 16:01:30.400492 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:30.400580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:30.408456 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 16:01:30.408544 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 16:01:30.410792 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 16:01:31.520047 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 16:01:31.552697 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory Nov 5 16:01:31.557643 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 16:01:31.562052 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 16:01:32.328690 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 16:01:32.331016 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 16:01:32.333641 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 16:01:32.354149 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 16:01:32.356519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:32.385005 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 16:01:32.393481 ignition[1175]: INFO : Ignition 2.22.0 Nov 5 16:01:32.393481 ignition[1175]: INFO : Stage: mount Nov 5 16:01:32.395023 ignition[1175]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:32.395023 ignition[1175]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:32.395023 ignition[1175]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:32.396928 ignition[1175]: INFO : PUT result: OK Nov 5 16:01:32.398566 ignition[1175]: INFO : mount: mount passed Nov 5 16:01:32.399828 ignition[1175]: INFO : Ignition finished successfully Nov 5 16:01:32.401126 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 16:01:32.402809 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 16:01:32.426388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 16:01:32.455460 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1186) Nov 5 16:01:32.458602 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 16:01:32.458670 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 5 16:01:32.466718 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 16:01:32.466791 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 16:01:32.469759 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
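The "cut: /sysroot/etc/passwd: No such file or directory" messages above come from initrd-setup-root copying account data into the new root before /sysroot/etc/{passwd,group,shadow,gshadow} exist on a first boot. For illustration, a standalone Go sketch of the field extraction that cut -d: performs on a passwd-format file (hypothetical; it reads the running system's /etc/passwd, not /sysroot):

package main

// Rough equivalent of `cut -d: -f1,7 /etc/passwd`: print each account's
// login name and shell. Illustrative only; the initrd service operates on
// /sysroot/etc/passwd, which may not exist yet on first boot.

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/passwd")
	if err != nil {
		log.Fatal(err) // mirrors cut's "No such file or directory" case
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Split(sc.Text(), ":")
		if len(fields) < 7 {
			continue // skip malformed lines
		}
		fmt.Printf("%s:%s\n", fields[0], fields[6])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}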
Nov 5 16:01:32.502154 ignition[1202]: INFO : Ignition 2.22.0 Nov 5 16:01:32.502154 ignition[1202]: INFO : Stage: files Nov 5 16:01:32.503718 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:32.503718 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:32.503718 ignition[1202]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:32.505088 ignition[1202]: INFO : PUT result: OK Nov 5 16:01:32.507420 ignition[1202]: DEBUG : files: compiled without relabeling support, skipping Nov 5 16:01:32.510126 ignition[1202]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 16:01:32.510126 ignition[1202]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 16:01:32.516961 ignition[1202]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 16:01:32.517893 ignition[1202]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 16:01:32.519013 unknown[1202]: wrote ssh authorized keys file for user: core Nov 5 16:01:32.519668 ignition[1202]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 16:01:32.555021 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 16:01:32.556182 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 16:01:32.618906 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 16:01:32.780748 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 16:01:32.780748 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 16:01:32.783008 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 16:01:32.788052 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 16:01:32.788052 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 16:01:32.788052 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 16:01:32.790610 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 16:01:32.790610 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 16:01:32.790610 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 5 16:01:33.248756 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 16:01:34.241564 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 16:01:34.241564 ignition[1202]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 16:01:34.244694 ignition[1202]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 16:01:34.247544 ignition[1202]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 16:01:34.247544 ignition[1202]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 16:01:34.247544 ignition[1202]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 16:01:34.253065 ignition[1202]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 16:01:34.253065 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:01:34.253065 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 16:01:34.253065 ignition[1202]: INFO : files: files passed Nov 5 16:01:34.253065 ignition[1202]: INFO : Ignition finished successfully Nov 5 16:01:34.251481 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 16:01:34.254737 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 16:01:34.269696 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 16:01:34.283169 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 16:01:34.283332 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 16:01:34.294121 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:01:34.294121 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:01:34.297490 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 16:01:34.297880 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 16:01:34.300021 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 16:01:34.301951 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 16:01:34.366471 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Nov 5 16:01:34.366625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 16:01:34.367893 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 16:01:34.369216 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 16:01:34.370478 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 16:01:34.371770 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 16:01:34.402066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 16:01:34.404578 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 16:01:34.431502 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 16:01:34.431767 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:01:34.432684 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 16:01:34.433650 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 16:01:34.434546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 16:01:34.434795 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 16:01:34.435914 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 16:01:34.437069 systemd[1]: Stopped target basic.target - Basic System. Nov 5 16:01:34.437866 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 16:01:34.438805 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 16:01:34.439846 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 16:01:34.440811 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 16:01:34.441574 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 16:01:34.442368 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 16:01:34.443204 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 16:01:34.444028 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 16:01:34.445557 systemd[1]: Stopped target swap.target - Swaps. Nov 5 16:01:34.446292 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 16:01:34.446574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 16:01:34.447617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:01:34.448563 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:01:34.449281 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 16:01:34.449502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:01:34.450118 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 16:01:34.450347 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 16:01:34.451390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 16:01:34.451612 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 16:01:34.453068 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 16:01:34.453287 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 16:01:34.456550 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 5 16:01:34.458357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 16:01:34.460553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 16:01:34.461507 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:01:34.464686 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 16:01:34.465649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:01:34.467156 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 16:01:34.467992 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 16:01:34.475333 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 16:01:34.476470 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 16:01:34.501911 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 16:01:34.503236 ignition[1258]: INFO : Ignition 2.22.0 Nov 5 16:01:34.503236 ignition[1258]: INFO : Stage: umount Nov 5 16:01:34.506560 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 16:01:34.506560 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 16:01:34.506560 ignition[1258]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 16:01:34.506560 ignition[1258]: INFO : PUT result: OK Nov 5 16:01:34.509497 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 16:01:34.511314 ignition[1258]: INFO : umount: umount passed Nov 5 16:01:34.511314 ignition[1258]: INFO : Ignition finished successfully Nov 5 16:01:34.509657 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 16:01:34.512033 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 16:01:34.512182 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 16:01:34.513649 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 16:01:34.513775 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 16:01:34.514530 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 16:01:34.514598 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 16:01:34.515135 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 16:01:34.515201 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 16:01:34.515796 systemd[1]: Stopped target network.target - Network. Nov 5 16:01:34.516554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 16:01:34.516621 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 16:01:34.517183 systemd[1]: Stopped target paths.target - Path Units. Nov 5 16:01:34.517786 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 16:01:34.517865 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:01:34.518419 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 16:01:34.519075 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 16:01:34.519732 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 16:01:34.519794 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 16:01:34.520542 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 16:01:34.520593 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 16:01:34.521157 systemd[1]: ignition-setup.service: Deactivated successfully. 
Nov 5 16:01:34.521240 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 16:01:34.521828 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 16:01:34.521895 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 16:01:34.522460 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 16:01:34.522529 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 16:01:34.523651 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 16:01:34.524429 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 16:01:34.532919 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 16:01:34.533070 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 16:01:34.535294 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 16:01:34.535610 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 16:01:34.539051 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 16:01:34.539608 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 16:01:34.539665 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:01:34.541675 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 16:01:34.542807 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 16:01:34.542893 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 16:01:34.548153 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 16:01:34.548412 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:01:34.549190 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 16:01:34.549263 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 16:01:34.549893 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:01:34.567256 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 16:01:34.567450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:01:34.572679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 16:01:34.572773 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 16:01:34.574821 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 16:01:34.575525 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:01:34.576712 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 16:01:34.576802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 16:01:34.578266 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 16:01:34.578346 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 16:01:34.579420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 16:01:34.579555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 16:01:34.583656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 16:01:34.584482 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 16:01:34.584576 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 5 16:01:34.587627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 16:01:34.587726 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:01:34.590557 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 16:01:34.590647 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 16:01:34.592191 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 16:01:34.592378 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:01:34.593518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:01:34.593595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:34.595199 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 16:01:34.597553 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 16:01:34.604691 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 16:01:34.604825 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 16:01:34.606713 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 16:01:34.609493 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 16:01:34.631092 systemd[1]: Switching root. Nov 5 16:01:34.733248 systemd-journald[286]: Journal stopped Nov 5 16:01:38.533653 systemd-journald[286]: Received SIGTERM from PID 1 (systemd). Nov 5 16:01:38.533733 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 16:01:38.533766 kernel: SELinux: policy capability open_perms=1 Nov 5 16:01:38.533790 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 16:01:38.533810 kernel: SELinux: policy capability always_check_network=0 Nov 5 16:01:38.533832 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 16:01:38.533851 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 16:01:38.533871 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 16:01:38.533889 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 16:01:38.533909 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 16:01:38.533929 kernel: audit: type=1403 audit(1762358495.560:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 16:01:38.533949 systemd[1]: Successfully loaded SELinux policy in 130.749ms. Nov 5 16:01:38.533987 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.713ms. Nov 5 16:01:38.534015 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 16:01:38.534036 systemd[1]: Detected virtualization amazon. Nov 5 16:01:38.534056 systemd[1]: Detected architecture x86-64. Nov 5 16:01:38.534077 systemd[1]: Detected first boot. Nov 5 16:01:38.534100 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 16:01:38.534122 zram_generator::config[1303]: No configuration found. 
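The "Initializing machine ID from SMBIOS/DMI UUID" line above refers to the product UUID the hypervisor exposes at /sys/class/dmi/id/product_uuid. A small sketch that reads it (usually requires root; the dash-stripped lowercase form printed is roughly how a UUID maps onto the 32-hex-digit /etc/machine-id format):

package main

// Read the SMBIOS/DMI product UUID that systemd uses to seed the machine ID
// on this first boot. Reading /sys/class/dmi/id/product_uuid typically
// requires root. Illustrative sketch only.

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
	if err != nil {
		log.Fatal(err)
	}
	uuid := strings.TrimSpace(string(raw))
	fmt.Println("DMI product UUID:", uuid)
	// /etc/machine-id holds 32 lowercase hex digits; this is roughly how the
	// UUID maps onto that form.
	fmt.Println("machine-id style:", strings.ToLower(strings.ReplaceAll(uuid, "-", "")))
}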
Nov 5 16:01:38.534147 kernel: Guest personality initialized and is inactive Nov 5 16:01:38.534167 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 16:01:38.534185 kernel: Initialized host personality Nov 5 16:01:38.534210 kernel: NET: Registered PF_VSOCK protocol family Nov 5 16:01:38.534232 systemd[1]: Populated /etc with preset unit settings. Nov 5 16:01:38.534255 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 16:01:38.534279 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 16:01:38.534301 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 16:01:38.534321 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 16:01:38.534341 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 16:01:38.534360 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 16:01:38.534379 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 16:01:38.534401 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 16:01:38.534427 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 16:01:38.534476 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 16:01:38.534500 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 16:01:38.534523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 16:01:38.534544 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 16:01:38.534566 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 16:01:38.534587 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 16:01:38.534612 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 16:01:38.534633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 16:01:38.534654 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 16:01:38.534674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 16:01:38.534694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 16:01:38.534714 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 16:01:38.534739 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 16:01:38.534759 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 16:01:38.534780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 16:01:38.534802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 16:01:38.534825 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 16:01:38.534845 systemd[1]: Reached target slices.target - Slice Units. Nov 5 16:01:38.534865 systemd[1]: Reached target swap.target - Swaps. Nov 5 16:01:38.534887 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 16:01:38.534916 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 16:01:38.534935 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Nov 5 16:01:38.534953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 16:01:38.534973 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 16:01:38.534991 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 16:01:38.535011 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 16:01:38.535030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 16:01:38.535054 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 16:01:38.535074 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 16:01:38.535093 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:38.535111 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 16:01:38.535140 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 16:01:38.535163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 16:01:38.535185 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 16:01:38.535214 systemd[1]: Reached target machines.target - Containers. Nov 5 16:01:38.535237 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 16:01:38.535258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:38.535280 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 16:01:38.535302 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 16:01:38.535324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:01:38.535349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 16:01:38.535369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:01:38.535389 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 16:01:38.535409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:01:38.535431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 16:01:38.535472 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 16:01:38.535494 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 16:01:38.535521 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 16:01:38.535541 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 16:01:38.535563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:01:38.535585 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 16:01:38.535606 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 16:01:38.535626 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 16:01:38.535647 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
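The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units being started above do nothing more than load those kernel modules early. Whether a module actually ended up loaded can be read back from /proc/modules, as in this sketch (lsmod presents the same data):

package main

// List loaded kernel modules from /proc/modules (name, size, refcount),
// e.g. to confirm the modprobe@*.service units above loaded fuse, dm_mod,
// loop and friends. Sketch only.

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// fields: name size refcount dependencies state address
		fields := strings.Fields(sc.Text())
		if len(fields) < 3 {
			continue
		}
		fmt.Printf("%-20s %10s bytes  refs=%s\n", fields[0], fields[1], fields[2])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}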
Nov 5 16:01:38.536592 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 16:01:38.536633 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 16:01:38.536657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:38.536676 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 16:01:38.536700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 16:01:38.536722 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 16:01:38.536741 kernel: fuse: init (API version 7.41) Nov 5 16:01:38.536763 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 16:01:38.536783 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 16:01:38.536803 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 16:01:38.536827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 16:01:38.536850 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 16:01:38.536870 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 16:01:38.536890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:01:38.536910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:01:38.536933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:01:38.536954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:01:38.536974 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 16:01:38.537000 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 16:01:38.537023 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:01:38.537046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:01:38.537067 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 16:01:38.537091 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 16:01:38.537114 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 16:01:38.537135 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 16:01:38.537158 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 16:01:38.537181 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 16:01:38.537204 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 16:01:38.537226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 16:01:38.537297 systemd-journald[1379]: Collecting audit messages is disabled. Nov 5 16:01:38.537343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:38.537368 systemd-journald[1379]: Journal started Nov 5 16:01:38.537412 systemd-journald[1379]: Runtime Journal (/run/log/journal/ec26ef6f20e58c65d6dce153a7f76053) is 4.7M, max 38.1M, 33.3M free. Nov 5 16:01:38.101577 systemd[1]: Queued start job for default target multi-user.target. Nov 5 16:01:38.119948 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
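systemd-journald reports its runtime journal under /run/log/journal/<machine-id> together with its current size, cap and free space (4.7M of a 38.1M cap here). That figure can be approximated by summing the journal files on disk, e.g.:

package main

// Approximate the "Runtime Journal ... is 4.7M" figure logged above by
// summing file sizes under /run/log/journal. The path layout is standard
// for journald; this is a sketch, not journald's own accounting.

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

func main() {
	var total int64
	root := "/run/log/journal"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		total += info.Size()
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %.1f MiB of journal files\n", root, float64(total)/(1024*1024))
}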
Nov 5 16:01:38.120729 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 16:01:38.540459 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 16:01:38.546484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 16:01:38.556520 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 16:01:38.563474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 16:01:38.565394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:01:38.578518 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 16:01:38.587550 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 16:01:38.594466 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 16:01:38.601007 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 16:01:38.603753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 16:01:38.606361 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 16:01:38.610515 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 16:01:38.612136 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 16:01:38.619125 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 16:01:38.639187 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 16:01:38.640575 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 16:01:38.644681 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 16:01:38.648620 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 16:01:38.683278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:01:38.697337 systemd-journald[1379]: Time spent on flushing to /var/log/journal/ec26ef6f20e58c65d6dce153a7f76053 is 92.896ms for 1001 entries. Nov 5 16:01:38.697337 systemd-journald[1379]: System Journal (/var/log/journal/ec26ef6f20e58c65d6dce153a7f76053) is 8M, max 588.1M, 580.1M free. Nov 5 16:01:38.803427 systemd-journald[1379]: Received client request to flush runtime journal. Nov 5 16:01:38.803515 kernel: loop1: detected capacity change from 0 to 128048 Nov 5 16:01:38.803542 kernel: ACPI: bus type drm_connector registered Nov 5 16:01:38.723117 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 16:01:38.740555 systemd-tmpfiles[1419]: ACLs are not supported, ignoring. Nov 5 16:01:38.740578 systemd-tmpfiles[1419]: ACLs are not supported, ignoring. Nov 5 16:01:38.749526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 16:01:38.759896 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 16:01:38.766402 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 16:01:38.806180 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 16:01:38.807333 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 5 16:01:38.807612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 16:01:38.850862 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 16:01:38.855643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 16:01:38.858688 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 16:01:38.888892 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Nov 5 16:01:38.888920 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Nov 5 16:01:38.894308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 16:01:38.935676 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 16:01:38.989396 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 16:01:39.121502 systemd-resolved[1455]: Positive Trust Anchors: Nov 5 16:01:39.121527 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 16:01:39.121534 systemd-resolved[1455]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 16:01:39.121588 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 16:01:39.123066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 16:01:39.128617 systemd-resolved[1455]: Defaulting to hostname 'linux'. Nov 5 16:01:39.130479 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 16:01:39.131370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 16:01:39.154472 kernel: loop2: detected capacity change from 0 to 110984 Nov 5 16:01:39.460158 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 16:01:39.463061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 16:01:39.496477 kernel: loop3: detected capacity change from 0 to 72360 Nov 5 16:01:39.504171 systemd-udevd[1468]: Using default interface naming scheme 'v257'. Nov 5 16:01:39.711968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 16:01:39.715329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 16:01:39.788178 (udev-worker)[1474]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:01:39.812552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 16:01:39.822114 kernel: loop4: detected capacity change from 0 to 219144 Nov 5 16:01:39.828661 systemd-networkd[1473]: lo: Link UP Nov 5 16:01:39.830088 systemd-networkd[1473]: lo: Gained carrier Nov 5 16:01:39.833083 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 16:01:39.834615 systemd[1]: Reached target network.target - Network. Nov 5 16:01:39.838949 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
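systemd-networkd brings up lo above; eth0 and its DHCPv4 lease (172.31.16.11/20 from 172.31.16.1) follow a few lines further on. The resulting links and addresses can be enumerated from Go's standard library, e.g.:

package main

// Enumerate network links and their addresses, e.g. to see the lo and eth0
// state that systemd-networkd reports in the surrounding log. Standard
// library only; sketch.

import (
	"fmt"
	"log"
	"net"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, ifc := range ifaces {
		up := ifc.Flags&net.FlagUp != 0
		fmt.Printf("%-6s up=%-5v mtu=%d\n", ifc.Name, up, ifc.MTU)
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			fmt.Printf("       %s\n", a.String())
		}
	}
}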
Nov 5 16:01:39.843596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 16:01:39.885537 systemd-networkd[1473]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:01:39.885551 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 16:01:39.890294 systemd-networkd[1473]: eth0: Link UP Nov 5 16:01:39.890519 systemd-networkd[1473]: eth0: Gained carrier Nov 5 16:01:39.890555 systemd-networkd[1473]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 16:01:39.899532 systemd-networkd[1473]: eth0: DHCPv4 address 172.31.16.11/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 16:01:39.912481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 16:01:39.920073 kernel: ACPI: button: Power Button [PWRF] Nov 5 16:01:39.920201 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Nov 5 16:01:39.925469 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 5 16:01:39.925887 kernel: ACPI: button: Sleep Button [SLPF] Nov 5 16:01:39.957483 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 16:01:40.004022 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 16:01:40.086461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:40.103660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 16:01:40.103923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:40.119030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 16:01:40.122995 kernel: loop5: detected capacity change from 0 to 128048 Nov 5 16:01:40.146472 kernel: loop6: detected capacity change from 0 to 110984 Nov 5 16:01:40.172517 kernel: loop7: detected capacity change from 0 to 72360 Nov 5 16:01:40.200524 kernel: loop1: detected capacity change from 0 to 219144 Nov 5 16:01:40.228489 (sd-merge)[1517]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Nov 5 16:01:40.232592 (sd-merge)[1517]: Merged extensions into '/usr'. Nov 5 16:01:40.237136 systemd[1]: Reload requested from client PID 1418 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 16:01:40.237158 systemd[1]: Reloading... Nov 5 16:01:40.309477 zram_generator::config[1554]: No configuration found. Nov 5 16:01:40.677915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 16:01:40.678982 systemd[1]: Reloading finished in 441 ms. Nov 5 16:01:40.707303 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 16:01:40.708225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 16:01:40.747897 systemd[1]: Starting ensure-sysext.service... Nov 5 16:01:40.750604 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 16:01:40.753268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 16:01:40.769942 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Nov 5 16:01:40.770247 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 16:01:40.770607 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 16:01:40.770940 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 16:01:40.771987 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 16:01:40.772276 systemd[1]: Reload requested from client PID 1690 ('systemctl') (unit ensure-sysext.service)... Nov 5 16:01:40.772294 systemd[1]: Reloading... Nov 5 16:01:40.772482 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. Nov 5 16:01:40.772587 systemd-tmpfiles[1692]: ACLs are not supported, ignoring. Nov 5 16:01:40.781244 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 16:01:40.781256 systemd-tmpfiles[1692]: Skipping /boot Nov 5 16:01:40.791657 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 16:01:40.791767 systemd-tmpfiles[1692]: Skipping /boot Nov 5 16:01:40.851486 zram_generator::config[1726]: No configuration found. Nov 5 16:01:41.088002 systemd[1]: Reloading finished in 315 ms. Nov 5 16:01:41.118352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 16:01:41.136018 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 16:01:41.147162 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 16:01:41.152764 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 16:01:41.155878 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 16:01:41.166807 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 16:01:41.171942 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 16:01:41.179134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:41.179431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:41.182860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:01:41.193874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:01:41.198498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:01:41.199312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:41.199535 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:01:41.199696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:41.210249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:01:41.210504 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 5 16:01:41.214158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:41.215133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:41.215626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:41.215855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:01:41.216098 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:41.223250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:01:41.223683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:01:41.225202 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 16:01:41.225709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:01:41.240013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 16:01:41.248451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:41.248880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 16:01:41.250707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 16:01:41.256473 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 16:01:41.262040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 16:01:41.267413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 16:01:41.268732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 16:01:41.268798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 16:01:41.268886 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 16:01:41.270345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 16:01:41.274706 systemd[1]: Finished ensure-sysext.service. Nov 5 16:01:41.280743 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 16:01:41.282013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 16:01:41.282248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 16:01:41.286974 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 16:01:41.287385 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 16:01:41.294304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 16:01:41.294698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 16:01:41.295653 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 5 16:01:41.295926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 16:01:41.297289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 16:01:41.297393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 16:01:41.407596 augenrules[1822]: No rules Nov 5 16:01:41.409432 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 16:01:41.410055 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 16:01:41.426096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 16:01:41.426875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 16:01:41.721626 systemd-networkd[1473]: eth0: Gained IPv6LL Nov 5 16:01:41.724464 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 16:01:41.725468 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 16:01:44.510756 ldconfig[1782]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 16:01:44.517194 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 16:01:44.523622 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 16:01:44.543527 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 16:01:44.544434 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 16:01:44.545052 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 16:01:44.545520 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 16:01:44.545933 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 16:01:44.546707 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 16:01:44.547189 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 16:01:44.547599 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 16:01:44.547961 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 16:01:44.548014 systemd[1]: Reached target paths.target - Path Units. Nov 5 16:01:44.548418 systemd[1]: Reached target timers.target - Timer Units. Nov 5 16:01:44.550037 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 16:01:44.551751 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 16:01:44.554491 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 16:01:44.555205 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 16:01:44.555705 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 16:01:44.558047 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 16:01:44.558819 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
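Several of the sockets above (dbus.socket, the sshd sockets, systemd-hostnamed.socket, docker.socket just below) are socket-activated: systemd owns the listening socket and hands it to the service as file descriptor 3, described by the LISTEN_PID and LISTEN_FDS environment variables. A bare-bones activated echo server as a sketch of that protocol (real services typically use a helper such as go-systemd's activation package):

package main

// Minimal sketch of systemd socket activation as used by the *.socket units
// listed above: the service inherits its listening socket from systemd,
// starting at file descriptor 3, with LISTEN_PID/LISTEN_FDS describing it.

import (
	"io"
	"log"
	"net"
	"os"
	"strconv"
)

func main() {
	if os.Getenv("LISTEN_PID") != strconv.Itoa(os.Getpid()) ||
		os.Getenv("LISTEN_FDS") != "1" {
		log.Fatal("not socket-activated (expected exactly one passed fd)")
	}
	// The first passed descriptor is always fd 3 (SD_LISTEN_FDS_START).
	f := os.NewFile(3, "systemd-socket")
	ln, err := net.FileListener(f)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			io.Copy(c, c) // echo whatever the client sends
		}(conn)
	}
}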
Nov 5 16:01:44.559943 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 16:01:44.561510 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 16:01:44.561928 systemd[1]: Reached target basic.target - Basic System. Nov 5 16:01:44.562346 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 16:01:44.562390 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 16:01:44.563544 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 16:01:44.565995 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 16:01:44.570596 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 16:01:44.576772 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 16:01:44.581597 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 16:01:44.585642 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 16:01:44.586232 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 16:01:44.591544 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 16:01:44.603726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:01:44.618329 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 16:01:44.621361 systemd[1]: Started ntpd.service - Network Time Service. Nov 5 16:01:44.624821 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 16:01:44.633546 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 16:01:44.641305 jq[1839]: false Nov 5 16:01:44.650111 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 5 16:01:44.660769 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 16:01:44.675391 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 16:01:44.677031 google_oslogin_nss_cache[1841]: oslogin_cache_refresh[1841]: Refreshing passwd entry cache Nov 5 16:01:44.676874 oslogin_cache_refresh[1841]: Refreshing passwd entry cache Nov 5 16:01:44.691708 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 16:01:44.692365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 16:01:44.693126 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 16:01:44.702594 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 16:01:44.714433 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 16:01:44.716732 oslogin_cache_refresh[1841]: Failure getting users, quitting Nov 5 16:01:44.723934 google_oslogin_nss_cache[1841]: oslogin_cache_refresh[1841]: Failure getting users, quitting Nov 5 16:01:44.723934 google_oslogin_nss_cache[1841]: oslogin_cache_refresh[1841]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 5 16:01:44.723934 google_oslogin_nss_cache[1841]: oslogin_cache_refresh[1841]: Refreshing group entry cache Nov 5 16:01:44.723934 google_oslogin_nss_cache[1841]: oslogin_cache_refresh[1841]: Failure getting groups, quitting Nov 5 16:01:44.723934 google_oslogin_nss_cache[1841]: oslogin_cache_refresh[1841]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 16:01:44.716755 oslogin_cache_refresh[1841]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 16:01:44.716821 oslogin_cache_refresh[1841]: Refreshing group entry cache Nov 5 16:01:44.719768 oslogin_cache_refresh[1841]: Failure getting groups, quitting Nov 5 16:01:44.719783 oslogin_cache_refresh[1841]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 16:01:44.731505 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 16:01:44.734210 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 16:01:44.735633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 16:01:44.736064 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 16:01:44.736325 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 16:01:44.742301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 16:01:44.768212 extend-filesystems[1840]: Found /dev/nvme0n1p6 Nov 5 16:01:44.818339 jq[1864]: true Nov 5 16:01:44.816392 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 16:01:44.816700 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 16:01:44.821584 update_engine[1861]: I20251105 16:01:44.819892 1861 main.cc:92] Flatcar Update Engine starting Nov 5 16:01:44.851420 systemd-logind[1855]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 16:01:44.851511 systemd-logind[1855]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 5 16:01:44.851539 systemd-logind[1855]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 16:01:44.858615 systemd-logind[1855]: New seat seat0. Nov 5 16:01:44.863490 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 16:01:44.868086 jq[1896]: true Nov 5 16:01:44.882111 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 16:01:44.881857 dbus-daemon[1837]: [system] SELinux support is enabled Nov 5 16:01:44.887827 (ntainerd)[1899]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 16:01:44.891777 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 16:01:44.891829 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 16:01:44.892503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 16:01:44.892532 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 16:01:44.901857 extend-filesystems[1840]: Found /dev/nvme0n1p9 Nov 5 16:01:44.906327 systemd[1]: Finished setup-oem.service - Setup OEM. 
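The oslogin_cache_refresh failures above only mean the Google OS Login NSS backend had no users or groups to enumerate on this EC2 host, so it wrote empty cache files. A quick, illustrative way to see what NSS currently resolves for a given account (the core user appears later in this log) is getent; the sketch below just wraps that lookup:

#!/usr/bin/env python3
"""Check what NSS resolves for a user, e.g. after an oslogin cache refresh.
getent consults the sources configured in /etc/nsswitch.conf."""
import subprocess
import sys

def nss_passwd(name: str):
    # getent exits with status 2 when the key is not found, so don't use check=True.
    proc = subprocess.run(["getent", "passwd", name], capture_output=True, text=True)
    return proc.stdout.strip() or None

if __name__ == "__main__":
    user = sys.argv[1] if len(sys.argv) > 1 else "core"
    print(nss_passwd(user) or f"{user}: not found via NSS")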
Nov 5 16:01:44.911730 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 5 16:01:44.914184 ntpd[1844]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:24 UTC 2025 (1): Starting Nov 5 16:01:44.914265 ntpd[1844]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:24 UTC 2025 (1): Starting Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: ---------------------------------------------------- Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: ntp-4 is maintained by Network Time Foundation, Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: corporation. Support and training for ntp-4 are Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: available at https://www.nwtime.org/support Nov 5 16:01:44.914611 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: ---------------------------------------------------- Nov 5 16:01:44.914276 ntpd[1844]: ---------------------------------------------------- Nov 5 16:01:44.914285 ntpd[1844]: ntp-4 is maintained by Network Time Foundation, Nov 5 16:01:44.914295 ntpd[1844]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 16:01:44.914305 ntpd[1844]: corporation. Support and training for ntp-4 are Nov 5 16:01:44.914314 ntpd[1844]: available at https://www.nwtime.org/support Nov 5 16:01:44.914323 ntpd[1844]: ---------------------------------------------------- Nov 5 16:01:44.931042 extend-filesystems[1840]: Checking size of /dev/nvme0n1p9 Nov 5 16:01:44.935246 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 16:01:44.936534 ntpd[1844]: proto: precision = 0.063 usec (-24) Nov 5 16:01:44.936680 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: proto: precision = 0.063 usec (-24) Nov 5 16:01:44.937376 ntpd[1844]: basedate set to 2025-10-24 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: basedate set to 2025-10-24 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: gps base set to 2025-10-26 (week 2390) Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listen normally on 3 eth0 172.31.16.11:123 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listen normally on 4 lo [::1]:123 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listen normally on 5 eth0 [fe80::44d:bfff:fe04:ab21%2]:123 Nov 5 16:01:44.945626 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: Listening on routing socket on fd #22 for interface updates Nov 5 16:01:44.937400 ntpd[1844]: gps base set to 2025-10-26 (week 2390) Nov 5 16:01:44.937558 ntpd[1844]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 16:01:44.937589 ntpd[1844]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 16:01:44.937815 ntpd[1844]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 16:01:44.937842 ntpd[1844]: Listen normally on 3 eth0 172.31.16.11:123 Nov 5 16:01:44.937871 ntpd[1844]: Listen normally on 4 lo [::1]:123 Nov 5 16:01:44.937899 ntpd[1844]: Listen normally on 5 eth0 [fe80::44d:bfff:fe04:ab21%2]:123 Nov 5 
16:01:44.937925 ntpd[1844]: Listening on routing socket on fd #22 for interface updates Nov 5 16:01:44.954234 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 16:01:44.955778 update_engine[1861]: I20251105 16:01:44.953342 1861 update_check_scheduler.cc:74] Next update check in 4m38s Nov 5 16:01:44.949774 dbus-daemon[1837]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1473 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 5 16:01:44.958388 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 16:01:44.966132 systemd[1]: Started update-engine.service - Update Engine. Nov 5 16:01:44.982744 tar[1878]: linux-amd64/LICENSE Nov 5 16:01:44.983058 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:44.983058 ntpd[1844]: 5 Nov 16:01:44 ntpd[1844]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:44.966190 ntpd[1844]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:44.966225 ntpd[1844]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 16:01:44.984567 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 5 16:01:44.987679 tar[1878]: linux-amd64/helm Nov 5 16:01:44.988887 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 16:01:44.997823 coreos-metadata[1836]: Nov 05 16:01:44.993 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 16:01:45.005397 extend-filesystems[1840]: Resized partition /dev/nvme0n1p9 Nov 5 16:01:45.008162 coreos-metadata[1836]: Nov 05 16:01:45.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 5 16:01:45.008162 coreos-metadata[1836]: Nov 05 16:01:45.007 INFO Fetch successful Nov 5 16:01:45.008266 extend-filesystems[1931]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 16:01:45.017622 coreos-metadata[1836]: Nov 05 16:01:45.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 5 16:01:45.017953 coreos-metadata[1836]: Nov 05 16:01:45.017 INFO Fetch successful Nov 5 16:01:45.017953 coreos-metadata[1836]: Nov 05 16:01:45.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 5 16:01:45.020755 coreos-metadata[1836]: Nov 05 16:01:45.019 INFO Fetch successful Nov 5 16:01:45.020755 coreos-metadata[1836]: Nov 05 16:01:45.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 5 16:01:45.020755 coreos-metadata[1836]: Nov 05 16:01:45.020 INFO Fetch successful Nov 5 16:01:45.020755 coreos-metadata[1836]: Nov 05 16:01:45.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 5 16:01:45.020975 coreos-metadata[1836]: Nov 05 16:01:45.020 INFO Fetch failed with 404: resource not found Nov 5 16:01:45.021026 coreos-metadata[1836]: Nov 05 16:01:45.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.021 INFO Fetch successful Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.022 INFO Fetch successful Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt 
#1 Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.022 INFO Fetch successful Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.023 INFO Fetch successful Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.023 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 5 16:01:45.024196 coreos-metadata[1836]: Nov 05 16:01:45.023 INFO Fetch successful Nov 5 16:01:45.027620 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Nov 5 16:01:45.053232 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Nov 5 16:01:45.072205 extend-filesystems[1931]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 5 16:01:45.072205 extend-filesystems[1931]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 5 16:01:45.072205 extend-filesystems[1931]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Nov 5 16:01:45.079638 extend-filesystems[1840]: Resized filesystem in /dev/nvme0n1p9 Nov 5 16:01:45.076986 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 16:01:45.077283 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 16:01:45.118053 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 16:01:45.121059 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 16:01:45.158472 bash[1952]: Updated "/home/core/.ssh/authorized_keys" Nov 5 16:01:45.161982 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 16:01:45.171404 systemd[1]: Starting sshkeys.service... Nov 5 16:01:45.241157 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 5 16:01:45.281001 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 5 16:01:45.288638 systemd[1]: Starting polkit.service - Authorization Manager... Nov 5 16:01:45.281814 dbus-daemon[1837]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1927 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 5 16:01:45.312311 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 16:01:45.316926 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 5 16:01:45.624814 amazon-ssm-agent[1907]: Initializing new seelog logger Nov 5 16:01:45.625180 amazon-ssm-agent[1907]: New Seelog Logger Creation Complete Nov 5 16:01:45.625180 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.625180 amazon-ssm-agent[1907]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.641358 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 processing appconfig overrides Nov 5 16:01:45.641870 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.641870 amazon-ssm-agent[1907]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
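The resize2fs output above grows the root filesystem on /dev/nvme0n1p9 online, from 1617920 to 2604027 4k blocks. A small sketch of the arithmetic behind those figures (block counts are copied verbatim from the log; 4096-byte blocks per the "(4k)" note):

#!/usr/bin/env python3
"""Convert the ext4 block counts logged by resize2fs into byte sizes."""
BLOCK_SIZE = 4096
OLD_BLOCKS = 1_617_920   # blocks before the resize, from the log
NEW_BLOCKS = 2_604_027   # blocks after the resize, from the log

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

if __name__ == "__main__":
    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")               # ~6.17 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")               # ~9.93 GiB
    print(f"gained: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")  # ~3.76 GiB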
Nov 5 16:01:45.641997 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 processing appconfig overrides Nov 5 16:01:45.642321 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.642321 amazon-ssm-agent[1907]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.642422 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 processing appconfig overrides Nov 5 16:01:45.644181 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.6416 INFO Proxy environment variables: Nov 5 16:01:45.660040 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.660040 amazon-ssm-agent[1907]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:45.660209 amazon-ssm-agent[1907]: 2025/11/05 16:01:45 processing appconfig overrides Nov 5 16:01:45.672774 coreos-metadata[1984]: Nov 05 16:01:45.672 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 16:01:45.674582 coreos-metadata[1984]: Nov 05 16:01:45.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 5 16:01:45.676665 coreos-metadata[1984]: Nov 05 16:01:45.675 INFO Fetch successful Nov 5 16:01:45.676665 coreos-metadata[1984]: Nov 05 16:01:45.675 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 5 16:01:45.681432 coreos-metadata[1984]: Nov 05 16:01:45.681 INFO Fetch successful Nov 5 16:01:45.688550 unknown[1984]: wrote ssh authorized keys file for user: core Nov 5 16:01:45.723228 locksmithd[1930]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 16:01:45.755733 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.6418 INFO https_proxy: Nov 5 16:01:45.776729 update-ssh-keys[2070]: Updated "/home/core/.ssh/authorized_keys" Nov 5 16:01:45.773986 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 16:01:45.781954 systemd[1]: Finished sshkeys.service. Nov 5 16:01:45.855848 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.6418 INFO http_proxy: Nov 5 16:01:45.866796 polkitd[1980]: Started polkitd version 126 Nov 5 16:01:45.868734 sshd_keygen[1868]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 16:01:45.884145 polkitd[1980]: Loading rules from directory /etc/polkit-1/rules.d Nov 5 16:01:45.884699 polkitd[1980]: Loading rules from directory /run/polkit-1/rules.d Nov 5 16:01:45.884751 polkitd[1980]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 16:01:45.885162 polkitd[1980]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 5 16:01:45.885186 polkitd[1980]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 16:01:45.885232 polkitd[1980]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 5 16:01:45.891537 polkitd[1980]: Finished loading, compiling and executing 2 rules Nov 5 16:01:45.892072 systemd[1]: Started polkit.service - Authorization Manager. Nov 5 16:01:45.896426 dbus-daemon[1837]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 5 16:01:45.902966 polkitd[1980]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 5 16:01:45.935038 systemd-resolved[1455]: System hostname changed to 'ip-172-31-16-11'. 
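Both metadata agents above (the one filling in instance data and the one fetching SSH public keys for core) follow the same IMDSv2 pattern visible in their log lines: PUT a session token, then GET metadata paths with that token. A minimal sketch of that flow, which only works from inside an EC2 instance where 169.254.169.254 is reachable:

#!/usr/bin/env python3
"""Minimal IMDSv2 fetch, mirroring the token-then-GET pattern in the
coreos-metadata entries. The 2021-01-03 API version matches the logged paths."""
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    for path in ("meta-data/instance-id", "meta-data/local-ipv4",
                 "meta-data/public-keys/0/openssh-key"):
        print(path, "=>", imds_get(path, tok))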
Nov 5 16:01:45.935516 systemd-hostnamed[1927]: Hostname set to (transient) Nov 5 16:01:45.954605 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.6418 INFO no_proxy: Nov 5 16:01:45.990258 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 16:01:45.998364 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 16:01:46.038671 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 16:01:46.039343 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 16:01:46.043773 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 16:01:46.053857 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.6419 INFO Checking if agent identity type OnPrem can be assumed Nov 5 16:01:46.094132 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 16:01:46.103913 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 16:01:46.110043 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 16:01:46.111798 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 16:01:46.155677 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.6421 INFO Checking if agent identity type EC2 can be assumed Nov 5 16:01:46.196279 containerd[1899]: time="2025-11-05T16:01:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 16:01:46.197087 containerd[1899]: time="2025-11-05T16:01:46.197046726Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 16:01:46.251363 containerd[1899]: time="2025-11-05T16:01:46.251311451Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.869µs" Nov 5 16:01:46.252000 containerd[1899]: time="2025-11-05T16:01:46.251975930Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 16:01:46.252100 containerd[1899]: time="2025-11-05T16:01:46.252085898Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 16:01:46.252246 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8347 INFO Agent will take identity from EC2 Nov 5 16:01:46.252886 containerd[1899]: time="2025-11-05T16:01:46.252862325Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 16:01:46.253454 containerd[1899]: time="2025-11-05T16:01:46.253420357Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 16:01:46.253572 containerd[1899]: time="2025-11-05T16:01:46.253554969Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 16:01:46.253712 containerd[1899]: time="2025-11-05T16:01:46.253694752Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.254455343Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.254756337Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.254776567Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.254791687Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.254804488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.254892044Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.255120337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.255164448Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.255180020Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 16:01:46.255463 containerd[1899]: time="2025-11-05T16:01:46.255235948Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 16:01:46.257678 containerd[1899]: time="2025-11-05T16:01:46.257648139Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 16:01:46.257866 containerd[1899]: time="2025-11-05T16:01:46.257850637Z" level=info msg="metadata content store policy set" policy=shared Nov 5 16:01:46.263451 containerd[1899]: time="2025-11-05T16:01:46.263393428Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 16:01:46.263591 containerd[1899]: time="2025-11-05T16:01:46.263487244Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 16:01:46.263591 containerd[1899]: time="2025-11-05T16:01:46.263507958Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 16:01:46.263999 containerd[1899]: time="2025-11-05T16:01:46.263945005Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 16:01:46.263999 containerd[1899]: time="2025-11-05T16:01:46.263976572Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 16:01:46.263999 containerd[1899]: time="2025-11-05T16:01:46.263996373Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264027355Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264045499Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264060272Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264075436Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264090182Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264107179Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264280197Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264308069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264344985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264361523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264379499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264400869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 16:01:46.264462 containerd[1899]: time="2025-11-05T16:01:46.264419191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 16:01:46.265603 containerd[1899]: time="2025-11-05T16:01:46.264434147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 16:01:46.265603 containerd[1899]: time="2025-11-05T16:01:46.265512431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 16:01:46.265603 containerd[1899]: time="2025-11-05T16:01:46.265533552Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 16:01:46.265603 containerd[1899]: time="2025-11-05T16:01:46.265550267Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 16:01:46.266190 containerd[1899]: time="2025-11-05T16:01:46.265635355Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 16:01:46.266190 containerd[1899]: time="2025-11-05T16:01:46.265655860Z" level=info msg="Start snapshots syncer" Nov 5 16:01:46.266190 containerd[1899]: time="2025-11-05T16:01:46.265702507Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 16:01:46.266298 containerd[1899]: time="2025-11-05T16:01:46.266205409Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 16:01:46.266298 containerd[1899]: time="2025-11-05T16:01:46.266273074Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267027978Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267238847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267282405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267308302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267334408Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267360735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267377826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 16:01:46.268114 containerd[1899]: time="2025-11-05T16:01:46.267393301Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.267435620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 16:01:46.269207 containerd[1899]: 
time="2025-11-05T16:01:46.268668801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268687399Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268741723Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268775136Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268788428Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268801435Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268813303Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268828040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268842945Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268863906Z" level=info msg="runtime interface created" Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268871436Z" level=info msg="created NRI interface" Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268883111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268901458Z" level=info msg="Connect containerd service" Nov 5 16:01:46.269207 containerd[1899]: time="2025-11-05T16:01:46.268950946Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 16:01:46.271878 containerd[1899]: time="2025-11-05T16:01:46.271835003Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 16:01:46.321848 tar[1878]: linux-amd64/README.md Nov 5 16:01:46.342494 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 16:01:46.351586 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8400 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 5 16:01:46.450915 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8400 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 5 16:01:46.550091 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8400 INFO [amazon-ssm-agent] Starting Core Agent Nov 5 16:01:46.650330 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8401 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 5 16:01:46.749976 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8401 INFO [Registrar] Starting registrar module Nov 5 16:01:46.802020 amazon-ssm-agent[1907]: 2025/11/05 16:01:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:46.802020 amazon-ssm-agent[1907]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 16:01:46.802020 amazon-ssm-agent[1907]: 2025/11/05 16:01:46 processing appconfig overrides Nov 5 16:01:46.827415 containerd[1899]: time="2025-11-05T16:01:46.827371789Z" level=info msg="Start subscribing containerd event" Nov 5 16:01:46.827575 containerd[1899]: time="2025-11-05T16:01:46.827561217Z" level=info msg="Start recovering state" Nov 5 16:01:46.827738 containerd[1899]: time="2025-11-05T16:01:46.827622242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 16:01:46.827738 containerd[1899]: time="2025-11-05T16:01:46.827704944Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 16:01:46.827738 containerd[1899]: time="2025-11-05T16:01:46.827706026Z" level=info msg="Start event monitor" Nov 5 16:01:46.827846 containerd[1899]: time="2025-11-05T16:01:46.827748219Z" level=info msg="Start cni network conf syncer for default" Nov 5 16:01:46.827846 containerd[1899]: time="2025-11-05T16:01:46.827759202Z" level=info msg="Start streaming server" Nov 5 16:01:46.827846 containerd[1899]: time="2025-11-05T16:01:46.827766939Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 16:01:46.827846 containerd[1899]: time="2025-11-05T16:01:46.827775382Z" level=info msg="runtime interface starting up..." Nov 5 16:01:46.827846 containerd[1899]: time="2025-11-05T16:01:46.827781095Z" level=info msg="starting plugins..." Nov 5 16:01:46.827846 containerd[1899]: time="2025-11-05T16:01:46.827807854Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 16:01:46.827966 containerd[1899]: time="2025-11-05T16:01:46.827917787Z" level=info msg="containerd successfully booted in 0.633880s" Nov 5 16:01:46.828103 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 16:01:46.836594 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8460 INFO [EC2Identity] Checking disk for registration info Nov 5 16:01:46.837617 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8460 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 5 16:01:46.837836 amazon-ssm-agent[1907]: 2025-11-05 16:01:45.8460 INFO [EC2Identity] Generating registration keypair Nov 5 16:01:46.837974 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.7458 INFO [EC2Identity] Checking write access before registering Nov 5 16:01:46.837974 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.7462 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 5 16:01:46.837974 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8015 INFO [EC2Identity] EC2 registration was successful. Nov 5 16:01:46.837974 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8016 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
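Once containerd reports serving on /run/containerd/containerd.sock above, the simplest health check is the ctr client that ships alongside containerd (assumed present here, as it normally is on Flatcar). A hedged sketch of that probe; it needs root or equivalent access to the socket:

#!/usr/bin/env python3
"""Ping containerd via its bundled ctr client once the log shows it
serving on /run/containerd/containerd.sock."""
import subprocess

SOCKET = "/run/containerd/containerd.sock"

def containerd_version() -> str:
    # `ctr --address <socket> version` prints client and server versions.
    return subprocess.run(
        ["ctr", "--address", SOCKET, "version"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(containerd_version())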
Nov 5 16:01:46.837974 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8016 INFO [CredentialRefresher] credentialRefresher has started Nov 5 16:01:46.839820 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8016 INFO [CredentialRefresher] Starting credentials refresher loop Nov 5 16:01:46.839820 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8357 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 5 16:01:46.839820 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8365 INFO [CredentialRefresher] Credentials ready Nov 5 16:01:46.849377 amazon-ssm-agent[1907]: 2025-11-05 16:01:46.8395 INFO [CredentialRefresher] Next credential rotation will be in 29.999935881766667 minutes Nov 5 16:01:47.851992 amazon-ssm-agent[1907]: 2025-11-05 16:01:47.8518 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 5 16:01:47.953331 amazon-ssm-agent[1907]: 2025-11-05 16:01:47.8540 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2119) started Nov 5 16:01:48.053822 amazon-ssm-agent[1907]: 2025-11-05 16:01:47.8540 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 5 16:01:49.560104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:01:49.562079 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 16:01:49.564921 systemd[1]: Startup finished in 3.230s (kernel) + 10.511s (initrd) + 14.131s (userspace) = 27.873s. Nov 5 16:01:49.573512 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:01:50.817708 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 16:01:50.821504 systemd[1]: Started sshd@0-172.31.16.11:22-139.178.68.195:60524.service - OpenSSH per-connection server daemon (139.178.68.195:60524). Nov 5 16:01:51.209304 kubelet[2136]: E1105 16:01:51.209075 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:01:51.211962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:01:51.212112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:01:51.212867 systemd[1]: kubelet.service: Consumed 1.047s CPU time, 256.6M memory peak. Nov 5 16:01:51.323022 sshd[2146]: Accepted publickey for core from 139.178.68.195 port 60524 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:51.334794 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:51.346872 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 16:01:51.348935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 16:01:51.359415 systemd-logind[1855]: New session 1 of user core. Nov 5 16:01:51.374163 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 16:01:51.378020 systemd[1]: Starting user@500.service - User Manager for UID 500... 
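The kubelet failure above is expected on a node that has not been joined yet: /var/lib/kubelet/config.yaml is only written once kubeadm (or an equivalent provisioner) runs, so systemd keeps restarting the unit until then. A small, illustrative preflight check for that state; apart from config.yaml, the paths below are the conventional kubeadm locations and are an assumption, not something this log confirms:

#!/usr/bin/env python3
"""Illustrative check for the 'kubelet not yet configured' state seen in the log."""
from pathlib import Path

CHECKS = {
    "kubelet config":       Path("/var/lib/kubelet/config.yaml"),          # missing per the log above
    "kubelet kubeconfig":   Path("/etc/kubernetes/kubelet.conf"),          # conventional kubeadm path (assumption)
    "bootstrap kubeconfig": Path("/etc/kubernetes/bootstrap-kubelet.conf"),# conventional kubeadm path (assumption)
}

if __name__ == "__main__":
    for name, path in CHECKS.items():
        print(f"{name:22s} {path} -> {'present' if path.exists() else 'missing'}")
    if not CHECKS["kubelet config"].exists():
        print("node not joined yet; kubelet will keep exiting until a provisioner writes its config")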
Nov 5 16:01:51.395269 (systemd)[2152]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 16:01:51.398361 systemd-logind[1855]: New session c1 of user core. Nov 5 16:01:51.568772 systemd[2152]: Queued start job for default target default.target. Nov 5 16:01:51.581660 systemd[2152]: Created slice app.slice - User Application Slice. Nov 5 16:01:51.581707 systemd[2152]: Reached target paths.target - Paths. Nov 5 16:01:51.581905 systemd[2152]: Reached target timers.target - Timers. Nov 5 16:01:51.583852 systemd[2152]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 16:01:51.597636 systemd[2152]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 16:01:51.597721 systemd[2152]: Reached target sockets.target - Sockets. Nov 5 16:01:51.597781 systemd[2152]: Reached target basic.target - Basic System. Nov 5 16:01:51.597836 systemd[2152]: Reached target default.target - Main User Target. Nov 5 16:01:51.597879 systemd[2152]: Startup finished in 191ms. Nov 5 16:01:51.598097 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 16:01:51.605979 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 16:01:51.760625 systemd[1]: Started sshd@1-172.31.16.11:22-139.178.68.195:56000.service - OpenSSH per-connection server daemon (139.178.68.195:56000). Nov 5 16:01:53.938646 systemd-resolved[1455]: Clock change detected. Flushing caches. Nov 5 16:01:53.978473 sshd[2163]: Accepted publickey for core from 139.178.68.195 port 56000 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:53.980902 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:53.986969 systemd-logind[1855]: New session 2 of user core. Nov 5 16:01:53.996376 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 16:01:54.124118 sshd[2166]: Connection closed by 139.178.68.195 port 56000 Nov 5 16:01:54.124719 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:54.128992 systemd[1]: sshd@1-172.31.16.11:22-139.178.68.195:56000.service: Deactivated successfully. Nov 5 16:01:54.131028 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 16:01:54.132331 systemd-logind[1855]: Session 2 logged out. Waiting for processes to exit. Nov 5 16:01:54.134316 systemd-logind[1855]: Removed session 2. Nov 5 16:01:54.162398 systemd[1]: Started sshd@2-172.31.16.11:22-139.178.68.195:56012.service - OpenSSH per-connection server daemon (139.178.68.195:56012). Nov 5 16:01:54.347267 sshd[2172]: Accepted publickey for core from 139.178.68.195 port 56012 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:54.349557 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:54.356062 systemd-logind[1855]: New session 3 of user core. Nov 5 16:01:54.367457 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 16:01:54.489211 sshd[2175]: Connection closed by 139.178.68.195 port 56012 Nov 5 16:01:54.490001 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:54.496773 systemd[1]: sshd@2-172.31.16.11:22-139.178.68.195:56012.service: Deactivated successfully. Nov 5 16:01:54.499296 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 16:01:54.500560 systemd-logind[1855]: Session 3 logged out. Waiting for processes to exit. Nov 5 16:01:54.502164 systemd-logind[1855]: Removed session 3. 
Nov 5 16:01:54.534857 systemd[1]: Started sshd@3-172.31.16.11:22-139.178.68.195:56016.service - OpenSSH per-connection server daemon (139.178.68.195:56016). Nov 5 16:01:54.726966 sshd[2181]: Accepted publickey for core from 139.178.68.195 port 56016 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:54.728216 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:54.734057 systemd-logind[1855]: New session 4 of user core. Nov 5 16:01:54.749544 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 16:01:54.887198 sshd[2184]: Connection closed by 139.178.68.195 port 56016 Nov 5 16:01:54.888110 sshd-session[2181]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:54.893614 systemd[1]: sshd@3-172.31.16.11:22-139.178.68.195:56016.service: Deactivated successfully. Nov 5 16:01:54.895767 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 16:01:54.896959 systemd-logind[1855]: Session 4 logged out. Waiting for processes to exit. Nov 5 16:01:54.898671 systemd-logind[1855]: Removed session 4. Nov 5 16:01:54.924774 systemd[1]: Started sshd@4-172.31.16.11:22-139.178.68.195:56018.service - OpenSSH per-connection server daemon (139.178.68.195:56018). Nov 5 16:01:55.114692 sshd[2190]: Accepted publickey for core from 139.178.68.195 port 56018 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:55.116824 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:55.123171 systemd-logind[1855]: New session 5 of user core. Nov 5 16:01:55.126230 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 16:01:55.300531 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 16:01:55.300914 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:55.315477 sudo[2194]: pam_unix(sudo:session): session closed for user root Nov 5 16:01:55.344929 sshd[2193]: Connection closed by 139.178.68.195 port 56018 Nov 5 16:01:55.347289 sshd-session[2190]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:55.355732 systemd[1]: sshd@4-172.31.16.11:22-139.178.68.195:56018.service: Deactivated successfully. Nov 5 16:01:55.358500 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 16:01:55.359836 systemd-logind[1855]: Session 5 logged out. Waiting for processes to exit. Nov 5 16:01:55.388445 systemd-logind[1855]: Removed session 5. Nov 5 16:01:55.391562 systemd[1]: Started sshd@5-172.31.16.11:22-139.178.68.195:56022.service - OpenSSH per-connection server daemon (139.178.68.195:56022). Nov 5 16:01:55.627134 sshd[2200]: Accepted publickey for core from 139.178.68.195 port 56022 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:55.629537 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:55.649492 systemd-logind[1855]: New session 6 of user core. Nov 5 16:01:55.659454 systemd[1]: Started session-6.scope - Session 6 of User core. 
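The `sudo setenforce 1` above switches SELinux to enforcing mode at runtime. A quick way to confirm the resulting mode is to read the same kernel interface getenforce consults; this sketch assumes an SELinux-enabled kernel with selinuxfs mounted at /sys/fs/selinux:

#!/usr/bin/env python3
"""Report the current SELinux mode by reading selinuxfs directly."""
from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")

def selinux_mode() -> str:
    if not ENFORCE.exists():
        return "selinuxfs not mounted (SELinux disabled or not available)"
    return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

if __name__ == "__main__":
    print(selinux_mode())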
Nov 5 16:01:55.774463 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 16:01:55.774838 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:55.792063 sudo[2205]: pam_unix(sudo:session): session closed for user root Nov 5 16:01:55.800881 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 16:01:55.801406 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:55.813187 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 16:01:55.884115 augenrules[2227]: No rules Nov 5 16:01:55.885565 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 16:01:55.885960 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 16:01:55.887262 sudo[2204]: pam_unix(sudo:session): session closed for user root Nov 5 16:01:55.912486 sshd[2203]: Connection closed by 139.178.68.195 port 56022 Nov 5 16:01:55.913630 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Nov 5 16:01:55.928778 systemd[1]: sshd@5-172.31.16.11:22-139.178.68.195:56022.service: Deactivated successfully. Nov 5 16:01:55.932259 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 16:01:55.936242 systemd-logind[1855]: Session 6 logged out. Waiting for processes to exit. Nov 5 16:01:55.962758 systemd[1]: Started sshd@6-172.31.16.11:22-139.178.68.195:56032.service - OpenSSH per-connection server daemon (139.178.68.195:56032). Nov 5 16:01:55.964845 systemd-logind[1855]: Removed session 6. Nov 5 16:01:56.157695 sshd[2236]: Accepted publickey for core from 139.178.68.195 port 56032 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:01:56.160136 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:01:56.185346 systemd-logind[1855]: New session 7 of user core. Nov 5 16:01:56.196260 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 16:01:56.303468 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 16:01:56.303858 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 16:01:57.860557 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 16:01:57.884582 (dockerd)[2259]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 16:01:58.901240 dockerd[2259]: time="2025-11-05T16:01:58.901114533Z" level=info msg="Starting up" Nov 5 16:01:58.902676 dockerd[2259]: time="2025-11-05T16:01:58.902635218Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 16:01:58.914546 dockerd[2259]: time="2025-11-05T16:01:58.914499098Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 16:01:58.963524 dockerd[2259]: time="2025-11-05T16:01:58.963484916Z" level=info msg="Loading containers: start." Nov 5 16:01:58.977026 kernel: Initializing XFRM netlink socket Nov 5 16:01:59.534477 (udev-worker)[2279]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:01:59.605555 systemd-networkd[1473]: docker0: Link UP Nov 5 16:01:59.614436 dockerd[2259]: time="2025-11-05T16:01:59.614305580Z" level=info msg="Loading containers: done." 
Nov 5 16:01:59.635365 dockerd[2259]: time="2025-11-05T16:01:59.635142790Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 16:01:59.635365 dockerd[2259]: time="2025-11-05T16:01:59.635233904Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 16:01:59.635365 dockerd[2259]: time="2025-11-05T16:01:59.635320511Z" level=info msg="Initializing buildkit" Nov 5 16:01:59.675026 dockerd[2259]: time="2025-11-05T16:01:59.674887240Z" level=info msg="Completed buildkit initialization" Nov 5 16:01:59.685077 dockerd[2259]: time="2025-11-05T16:01:59.685022592Z" level=info msg="Daemon has completed initialization" Nov 5 16:01:59.685435 dockerd[2259]: time="2025-11-05T16:01:59.685203756Z" level=info msg="API listen on /run/docker.sock" Nov 5 16:01:59.685363 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 16:02:01.833362 containerd[1899]: time="2025-11-05T16:02:01.833301208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 5 16:02:02.806813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880604211.mount: Deactivated successfully. Nov 5 16:02:03.491960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 16:02:03.507466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:03.844187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:03.856475 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:02:03.926003 kubelet[2537]: E1105 16:02:03.925942 2537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:02:03.933511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:02:03.933698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:02:03.934846 systemd[1]: kubelet.service: Consumed 223ms CPU time, 108.5M memory peak. 
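After dockerd logs "API listen on /run/docker.sock" above, the daemon answers its HTTP API on that Unix socket. A minimal sketch of a health probe that speaks HTTP over the socket directly, with no client library assumed; it needs access to the socket (root or the docker group):

#!/usr/bin/env python3
"""Ping the Docker daemon over its Unix socket via the /_ping API endpoint."""
import socket

SOCK = "/run/docker.sock"

def docker_ping() -> str:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    # A healthy daemon replies with an HTTP 200 status line and body "OK".
    return data.split(b"\r\n", 1)[0].decode()

if __name__ == "__main__":
    print(docker_ping())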
Nov 5 16:02:04.486556 containerd[1899]: time="2025-11-05T16:02:04.486488109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:04.488793 containerd[1899]: time="2025-11-05T16:02:04.488729561Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 5 16:02:04.490246 containerd[1899]: time="2025-11-05T16:02:04.490175873Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:04.494825 containerd[1899]: time="2025-11-05T16:02:04.494768218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:04.496275 containerd[1899]: time="2025-11-05T16:02:04.496072826Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.662719496s" Nov 5 16:02:04.496275 containerd[1899]: time="2025-11-05T16:02:04.496124768Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 5 16:02:04.496732 containerd[1899]: time="2025-11-05T16:02:04.496691271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 5 16:02:06.228868 containerd[1899]: time="2025-11-05T16:02:06.228805876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:06.234446 containerd[1899]: time="2025-11-05T16:02:06.234207597Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 5 16:02:06.239304 containerd[1899]: time="2025-11-05T16:02:06.239209352Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:06.245072 containerd[1899]: time="2025-11-05T16:02:06.245010562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:06.246598 containerd[1899]: time="2025-11-05T16:02:06.245845155Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.749032143s" Nov 5 16:02:06.246598 containerd[1899]: time="2025-11-05T16:02:06.245881674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 5 16:02:06.247157 containerd[1899]: 
time="2025-11-05T16:02:06.247129829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 5 16:02:07.596266 containerd[1899]: time="2025-11-05T16:02:07.596212873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:07.597501 containerd[1899]: time="2025-11-05T16:02:07.597461718Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 5 16:02:07.599015 containerd[1899]: time="2025-11-05T16:02:07.598718287Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:07.601577 containerd[1899]: time="2025-11-05T16:02:07.601517709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:07.602411 containerd[1899]: time="2025-11-05T16:02:07.602382786Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.355210812s" Nov 5 16:02:07.602949 containerd[1899]: time="2025-11-05T16:02:07.602763959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 5 16:02:07.605212 containerd[1899]: time="2025-11-05T16:02:07.604333049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 5 16:02:08.833058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422993138.mount: Deactivated successfully. 
Nov 5 16:02:09.220581 containerd[1899]: time="2025-11-05T16:02:09.220447147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:09.221874 containerd[1899]: time="2025-11-05T16:02:09.221660247Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 5 16:02:09.222949 containerd[1899]: time="2025-11-05T16:02:09.222909344Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:09.225872 containerd[1899]: time="2025-11-05T16:02:09.225790947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:09.226810 containerd[1899]: time="2025-11-05T16:02:09.226610742Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.622225161s" Nov 5 16:02:09.226810 containerd[1899]: time="2025-11-05T16:02:09.226656916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 5 16:02:09.227198 containerd[1899]: time="2025-11-05T16:02:09.227160052Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 5 16:02:09.867900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843381736.mount: Deactivated successfully. 
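Editor's note: the `var-lib-containerd-tmpmounts-containerd\x2dmount…` units that systemd reports as "Deactivated successfully" around these pulls appear to be the temporary mounts containerd sets up while handling image content. Their names are just the mount paths run through systemd's path escaping: "/" becomes "-", and a literal "-" inside a component becomes `\x2d`. A minimal sketch of that encoding for this simple case (the full rules, see `systemd-escape --path`, cover more characters):

```go
package main

import (
	"fmt"
	"strings"
)

// mountUnitName approximates systemd's path escaping for mount units:
// strip the leading "/", escape literal "-" inside each path component
// as `\x2d`, join components with "-", and append ".mount".
// Simplified sketch; systemd also escapes other special bytes.
func mountUnitName(path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
	}
	return strings.Join(parts, "-") + ".mount"
}

func main() {
	// Reproduces the unit name seen in the journal above.
	fmt.Println(mountUnitName("/var/lib/containerd/tmpmounts/containerd-mount3422993138"))
	// prints: var-lib-containerd-tmpmounts-containerd\x2dmount3422993138.mount
}
```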
Nov 5 16:02:11.262543 containerd[1899]: time="2025-11-05T16:02:11.262471947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:11.263389 containerd[1899]: time="2025-11-05T16:02:11.263349765Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 5 16:02:11.265292 containerd[1899]: time="2025-11-05T16:02:11.265257064Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:11.270576 containerd[1899]: time="2025-11-05T16:02:11.270521505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:11.272029 containerd[1899]: time="2025-11-05T16:02:11.271819871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.044622202s" Nov 5 16:02:11.272029 containerd[1899]: time="2025-11-05T16:02:11.271880457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 5 16:02:11.272378 containerd[1899]: time="2025-11-05T16:02:11.272354824Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 5 16:02:11.798928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829052764.mount: Deactivated successfully. 
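Editor's note: each pull above logs a "bytes read" figure and the wall-clock time it took ("… in 2.044622202s"), so the effective transfer rate falls straight out of the two numbers. A quick sketch using the coredns values from the entry above; the figures are copied from the log and are only illustrative:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the coredns pull logged above.
	bytesRead := 22388007.0
	elapsed, _ := time.ParseDuration("2.044622202s")

	mib := bytesRead / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n",
		mib, elapsed, mib/elapsed.Seconds())
}
```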
Nov 5 16:02:11.806789 containerd[1899]: time="2025-11-05T16:02:11.806717812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:11.807776 containerd[1899]: time="2025-11-05T16:02:11.807461938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 5 16:02:11.808525 containerd[1899]: time="2025-11-05T16:02:11.808476516Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:11.810408 containerd[1899]: time="2025-11-05T16:02:11.810349819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:11.812691 containerd[1899]: time="2025-11-05T16:02:11.812653128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 540.156141ms" Nov 5 16:02:11.812780 containerd[1899]: time="2025-11-05T16:02:11.812699596Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 5 16:02:11.813920 containerd[1899]: time="2025-11-05T16:02:11.813555403Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 5 16:02:14.126679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 16:02:14.130713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:14.439200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:14.454079 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 16:02:14.572877 kubelet[2663]: E1105 16:02:14.572787 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 16:02:14.575485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 16:02:14.575741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 16:02:14.576574 systemd[1]: kubelet.service: Consumed 224ms CPU time, 109.2M memory peak. 
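Editor's note: the kubelet exit above (status=1/FAILURE) is the expected failure mode when `/var/lib/kubelet/config.yaml` has not been written yet (it is typically created by kubeadm during init/join); systemd keeps scheduling restarts until the file exists. A hedged pre-flight check one could run before digging further; the path comes from the error message, the helper itself is hypothetical:

```go
package main

import (
	"fmt"
	"os"
)

// configPresent reports whether the kubelet config file the unit is
// complaining about exists and is non-empty.
func configPresent(path string) (bool, error) {
	info, err := os.Stat(path)
	if os.IsNotExist(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return info.Size() > 0, nil
}

func main() {
	// Path copied from the kubelet error in the journal above.
	ok, err := configPresent("/var/lib/kubelet/config.yaml")
	fmt.Println("config present:", ok, "err:", err)
}
```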
Nov 5 16:02:16.021874 containerd[1899]: time="2025-11-05T16:02:16.021686081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:16.023270 containerd[1899]: time="2025-11-05T16:02:16.023231996Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 5 16:02:16.028191 containerd[1899]: time="2025-11-05T16:02:16.028115105Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:16.036865 containerd[1899]: time="2025-11-05T16:02:16.036245664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:16.037738 containerd[1899]: time="2025-11-05T16:02:16.037693751Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.224107064s" Nov 5 16:02:16.038076 containerd[1899]: time="2025-11-05T16:02:16.037747427Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 5 16:02:17.993509 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 5 16:02:19.494120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:19.494383 systemd[1]: kubelet.service: Consumed 224ms CPU time, 109.2M memory peak. Nov 5 16:02:19.497302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:19.533073 systemd[1]: Reload requested from client PID 2703 ('systemctl') (unit session-7.scope)... Nov 5 16:02:19.533276 systemd[1]: Reloading... Nov 5 16:02:19.694015 zram_generator::config[2748]: No configuration found. Nov 5 16:02:20.020882 systemd[1]: Reloading finished in 486 ms. Nov 5 16:02:20.088370 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 16:02:20.088490 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 16:02:20.088862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:20.088921 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98.2M memory peak. Nov 5 16:02:20.091163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:20.388095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:20.400878 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:02:20.553803 kubelet[2811]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 16:02:20.554236 kubelet[2811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 16:02:20.561385 kubelet[2811]: I1105 16:02:20.561300 2811 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:02:20.785928 kubelet[2811]: I1105 16:02:20.785809 2811 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 16:02:20.785928 kubelet[2811]: I1105 16:02:20.785841 2811 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:02:20.791377 kubelet[2811]: I1105 16:02:20.791312 2811 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 16:02:20.791377 kubelet[2811]: I1105 16:02:20.791361 2811 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 16:02:20.792222 kubelet[2811]: I1105 16:02:20.792186 2811 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 16:02:20.816790 kubelet[2811]: I1105 16:02:20.816734 2811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:02:20.822995 kubelet[2811]: E1105 16:02:20.822936 2811 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 16:02:20.854787 kubelet[2811]: I1105 16:02:20.854755 2811 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:02:20.863133 kubelet[2811]: I1105 16:02:20.863087 2811 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 16:02:20.872481 kubelet[2811]: I1105 16:02:20.872344 2811 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:02:20.875051 kubelet[2811]: I1105 16:02:20.872473 2811 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:02:20.875299 kubelet[2811]: I1105 16:02:20.875062 2811 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 16:02:20.875299 kubelet[2811]: I1105 16:02:20.875087 2811 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 16:02:20.875299 kubelet[2811]: I1105 16:02:20.875235 2811 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 16:02:20.878499 kubelet[2811]: I1105 16:02:20.878439 2811 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:20.878733 kubelet[2811]: I1105 16:02:20.878694 2811 kubelet.go:475] "Attempting to sync node with API server" Nov 5 16:02:20.878733 kubelet[2811]: I1105 16:02:20.878717 2811 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:02:20.880931 kubelet[2811]: I1105 16:02:20.880901 2811 kubelet.go:387] "Adding apiserver pod source" Nov 5 16:02:20.881140 kubelet[2811]: I1105 16:02:20.881099 2811 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:02:20.887374 kubelet[2811]: E1105 16:02:20.880928 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-11&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 16:02:20.888141 kubelet[2811]: I1105 16:02:20.888117 2811 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:02:20.893199 kubelet[2811]: I1105 16:02:20.893165 2811 kubelet.go:940] 
"Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 16:02:20.893402 kubelet[2811]: I1105 16:02:20.893387 2811 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 16:02:20.897856 kubelet[2811]: E1105 16:02:20.897740 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 16:02:20.900688 kubelet[2811]: W1105 16:02:20.900290 2811 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 16:02:20.905912 kubelet[2811]: I1105 16:02:20.905625 2811 server.go:1262] "Started kubelet" Nov 5 16:02:20.908772 kubelet[2811]: I1105 16:02:20.908727 2811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:02:20.915823 kubelet[2811]: E1105 16:02:20.913075 2811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.11:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-11.187527c16d8d3acd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-11,UID:ip-172-31-16-11,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-11,},FirstTimestamp:2025-11-05 16:02:20.905560781 +0000 UTC m=+0.461777563,LastTimestamp:2025-11-05 16:02:20.905560781 +0000 UTC m=+0.461777563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-11,}" Nov 5 16:02:20.918704 kubelet[2811]: I1105 16:02:20.918513 2811 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:02:20.920990 kubelet[2811]: I1105 16:02:20.920932 2811 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:02:20.921110 kubelet[2811]: I1105 16:02:20.921055 2811 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 16:02:20.922291 kubelet[2811]: I1105 16:02:20.921370 2811 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:02:20.925338 kubelet[2811]: I1105 16:02:20.925310 2811 server.go:310] "Adding debug handlers to kubelet server" Nov 5 16:02:20.927995 kubelet[2811]: I1105 16:02:20.927524 2811 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:02:20.928822 kubelet[2811]: I1105 16:02:20.928792 2811 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 16:02:20.929343 kubelet[2811]: E1105 16:02:20.929319 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:20.932961 kubelet[2811]: I1105 16:02:20.932939 2811 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 16:02:20.933554 kubelet[2811]: I1105 16:02:20.933467 2811 
reconciler.go:29] "Reconciler: start to sync state" Nov 5 16:02:20.934164 kubelet[2811]: E1105 16:02:20.934132 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 16:02:20.934715 kubelet[2811]: I1105 16:02:20.934688 2811 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:02:20.935263 kubelet[2811]: E1105 16:02:20.935227 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="200ms" Nov 5 16:02:20.938937 kubelet[2811]: I1105 16:02:20.938909 2811 factory.go:223] Registration of the containerd container factory successfully Nov 5 16:02:20.939128 kubelet[2811]: I1105 16:02:20.939115 2811 factory.go:223] Registration of the systemd container factory successfully Nov 5 16:02:20.961180 kubelet[2811]: I1105 16:02:20.961128 2811 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 16:02:20.963333 kubelet[2811]: I1105 16:02:20.963308 2811 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 5 16:02:20.963513 kubelet[2811]: I1105 16:02:20.963502 2811 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 16:02:20.963715 kubelet[2811]: I1105 16:02:20.963703 2811 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 16:02:20.963827 kubelet[2811]: E1105 16:02:20.963812 2811 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:02:20.981999 kubelet[2811]: E1105 16:02:20.981145 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 16:02:20.993333 kubelet[2811]: I1105 16:02:20.993258 2811 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:02:20.993333 kubelet[2811]: I1105 16:02:20.993279 2811 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:02:20.993333 kubelet[2811]: I1105 16:02:20.993305 2811 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:20.996278 kubelet[2811]: I1105 16:02:20.996048 2811 policy_none.go:49] "None policy: Start" Nov 5 16:02:20.996278 kubelet[2811]: I1105 16:02:20.996073 2811 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 16:02:20.996278 kubelet[2811]: I1105 16:02:20.996084 2811 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 16:02:20.997774 kubelet[2811]: I1105 16:02:20.997751 2811 policy_none.go:47] "Start" Nov 5 16:02:21.005338 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 16:02:21.019533 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
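Editor's note: the "Setting rate limiting for endpoint" entry above configures the podresources socket with qps=100 and 10 burst tokens. A tiny token-bucket sketch with those numbers, only to illustrate how qps and burst relate; this is not the limiter kubelet actually uses:

```go
package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token bucket: capacity `burst`, refilled at `qps` tokens/s.
type bucket struct {
	qps, burst float64
	tokens     float64
	last       time.Time
}

func (b *bucket) allow(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{qps: 100, burst: 10, tokens: 10, last: time.Now()}
	granted := 0
	for i := 0; i < 50; i++ { // 50 back-to-back requests
		if b.allow(time.Now()) {
			granted++
		}
	}
	fmt.Println("granted immediately:", granted) // roughly the 10 burst tokens
}
```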
Nov 5 16:02:21.024476 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 16:02:21.029637 kubelet[2811]: E1105 16:02:21.029591 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:21.032444 kubelet[2811]: E1105 16:02:21.032407 2811 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 16:02:21.032668 kubelet[2811]: I1105 16:02:21.032651 2811 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:02:21.032748 kubelet[2811]: I1105 16:02:21.032670 2811 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:02:21.033366 kubelet[2811]: I1105 16:02:21.033292 2811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:02:21.035453 kubelet[2811]: E1105 16:02:21.035409 2811 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 16:02:21.036097 kubelet[2811]: E1105 16:02:21.035968 2811 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-11\" not found" Nov 5 16:02:21.089762 systemd[1]: Created slice kubepods-burstable-pod689d5700958538211b9bcc4a393888ab.slice - libcontainer container kubepods-burstable-pod689d5700958538211b9bcc4a393888ab.slice. Nov 5 16:02:21.099078 kubelet[2811]: E1105 16:02:21.099051 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:21.105688 systemd[1]: Created slice kubepods-burstable-pod82df93299d77164f6f9ad1186fd2abb4.slice - libcontainer container kubepods-burstable-pod82df93299d77164f6f9ad1186fd2abb4.slice. Nov 5 16:02:21.109255 kubelet[2811]: E1105 16:02:21.109226 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:21.117264 systemd[1]: Created slice kubepods-burstable-podb4373c445a1244277e832f5b5247a200.slice - libcontainer container kubepods-burstable-podb4373c445a1244277e832f5b5247a200.slice. 
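Editor's note: the eviction manager starting above enforces the hard thresholds listed in the Container Manager node config earlier in the log (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A simplified sketch of how a threshold of that quantity-or-percentage shape can be evaluated; the types are illustrative, not kubelet's own:

```go
package main

import "fmt"

// threshold mirrors the shape seen in the node config above: either an
// absolute quantity in bytes or a percentage of capacity, whichever is set.
type threshold struct {
	signal     string
	quantity   int64   // bytes; 0 means "use percentage"
	percentage float64 // fraction of capacity, e.g. 0.10
}

// breached reports whether the observed available amount has fallen
// below the threshold for the given capacity.
func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percentage: 0.10}

	fmt.Println(breached(memory, 80<<20, 2<<30)) // true: 80Mi left on a 2Gi node
	fmt.Println(breached(nodefs, 5<<30, 20<<30)) // false: 5Gi free of 20Gi (25%)
}
```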
Nov 5 16:02:21.120250 kubelet[2811]: E1105 16:02:21.120219 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:21.134228 kubelet[2811]: I1105 16:02:21.134088 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4373c445a1244277e832f5b5247a200-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-11\" (UID: \"b4373c445a1244277e832f5b5247a200\") " pod="kube-system/kube-scheduler-ip-172-31-16-11" Nov 5 16:02:21.134916 kubelet[2811]: I1105 16:02:21.134540 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/689d5700958538211b9bcc4a393888ab-ca-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"689d5700958538211b9bcc4a393888ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:21.134916 kubelet[2811]: I1105 16:02:21.134565 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/689d5700958538211b9bcc4a393888ab-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"689d5700958538211b9bcc4a393888ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:21.134916 kubelet[2811]: I1105 16:02:21.134591 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/689d5700958538211b9bcc4a393888ab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"689d5700958538211b9bcc4a393888ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:21.134916 kubelet[2811]: I1105 16:02:21.134607 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:21.135504 kubelet[2811]: I1105 16:02:21.135474 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:21.138821 kubelet[2811]: E1105 16:02:21.138776 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="400ms" Nov 5 16:02:21.139179 kubelet[2811]: E1105 16:02:21.139056 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Nov 5 16:02:21.235065 kubelet[2811]: I1105 16:02:21.234817 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:21.235065 kubelet[2811]: I1105 16:02:21.234863 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:21.235065 kubelet[2811]: I1105 16:02:21.234889 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:21.235065 kubelet[2811]: I1105 16:02:21.234905 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:21.341741 kubelet[2811]: I1105 16:02:21.341634 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:21.342113 kubelet[2811]: E1105 16:02:21.342079 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Nov 5 16:02:21.402697 containerd[1899]: time="2025-11-05T16:02:21.402646111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-11,Uid:689d5700958538211b9bcc4a393888ab,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:21.419408 containerd[1899]: time="2025-11-05T16:02:21.419350206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-11,Uid:82df93299d77164f6f9ad1186fd2abb4,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:21.423373 containerd[1899]: time="2025-11-05T16:02:21.423275656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-11,Uid:b4373c445a1244277e832f5b5247a200,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:21.539318 kubelet[2811]: E1105 16:02:21.539166 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="800ms" Nov 5 16:02:21.744689 kubelet[2811]: I1105 16:02:21.744655 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:21.745184 kubelet[2811]: E1105 16:02:21.745032 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Nov 5 16:02:21.783807 kubelet[2811]: E1105 16:02:21.783750 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-11&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 16:02:21.863883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721799691.mount: Deactivated successfully. 
Nov 5 16:02:21.869972 containerd[1899]: time="2025-11-05T16:02:21.869914154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:21.873883 containerd[1899]: time="2025-11-05T16:02:21.873817846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 16:02:21.874995 containerd[1899]: time="2025-11-05T16:02:21.874946606Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:21.876437 containerd[1899]: time="2025-11-05T16:02:21.876387360Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:21.878220 containerd[1899]: time="2025-11-05T16:02:21.878172017Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:21.880128 containerd[1899]: time="2025-11-05T16:02:21.880084034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 16:02:21.880876 containerd[1899]: time="2025-11-05T16:02:21.880742090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 16:02:21.882042 containerd[1899]: time="2025-11-05T16:02:21.882009595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 16:02:21.882707 containerd[1899]: time="2025-11-05T16:02:21.882682701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 477.174661ms" Nov 5 16:02:21.884656 containerd[1899]: time="2025-11-05T16:02:21.884625306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.240928ms" Nov 5 16:02:21.896261 containerd[1899]: time="2025-11-05T16:02:21.896210452Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 474.31238ms" Nov 5 16:02:21.961668 kubelet[2811]: E1105 16:02:21.961630 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 16:02:22.052814 containerd[1899]: time="2025-11-05T16:02:22.052676038Z" level=info msg="connecting to shim a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928" address="unix:///run/containerd/s/fe5f53508c91b257e7c79894d040264f24f545524d85eef1f311137fb788e19f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:22.055723 containerd[1899]: time="2025-11-05T16:02:22.055679447Z" level=info msg="connecting to shim a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098" address="unix:///run/containerd/s/1fcd1baf9256f150e1c1d0175bbd0437f6b700096669b5faec4c06d2f856682e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:22.061857 containerd[1899]: time="2025-11-05T16:02:22.061807474Z" level=info msg="connecting to shim 7c9c2bad61f436d3312d01deaf646ba49bd8f5e5a40c39527ef00f5a800ccab3" address="unix:///run/containerd/s/5d1d6f80b1a94ac2051ebc500511e6c24751d79ff2d675ab60ca8bbc60118534" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:22.176254 systemd[1]: Started cri-containerd-a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098.scope - libcontainer container a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098. Nov 5 16:02:22.182321 systemd[1]: Started cri-containerd-7c9c2bad61f436d3312d01deaf646ba49bd8f5e5a40c39527ef00f5a800ccab3.scope - libcontainer container 7c9c2bad61f436d3312d01deaf646ba49bd8f5e5a40c39527ef00f5a800ccab3. Nov 5 16:02:22.198133 systemd[1]: Started cri-containerd-a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928.scope - libcontainer container a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928. Nov 5 16:02:22.289625 containerd[1899]: time="2025-11-05T16:02:22.289505164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-11,Uid:82df93299d77164f6f9ad1186fd2abb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098\"" Nov 5 16:02:22.311847 containerd[1899]: time="2025-11-05T16:02:22.311432543Z" level=info msg="CreateContainer within sandbox \"a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 16:02:22.312476 containerd[1899]: time="2025-11-05T16:02:22.312423033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-11,Uid:689d5700958538211b9bcc4a393888ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c9c2bad61f436d3312d01deaf646ba49bd8f5e5a40c39527ef00f5a800ccab3\"" Nov 5 16:02:22.315303 kubelet[2811]: E1105 16:02:22.315252 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 16:02:22.317004 containerd[1899]: time="2025-11-05T16:02:22.316925426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-11,Uid:b4373c445a1244277e832f5b5247a200,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928\"" Nov 5 16:02:22.321050 containerd[1899]: time="2025-11-05T16:02:22.321008569Z" level=info msg="CreateContainer within sandbox \"7c9c2bad61f436d3312d01deaf646ba49bd8f5e5a40c39527ef00f5a800ccab3\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 16:02:22.328394 containerd[1899]: time="2025-11-05T16:02:22.328340321Z" level=info msg="CreateContainer within sandbox \"a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 16:02:22.341497 kubelet[2811]: E1105 16:02:22.341440 2811 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="1.6s" Nov 5 16:02:22.367011 containerd[1899]: time="2025-11-05T16:02:22.366392531Z" level=info msg="Container e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:22.367896 containerd[1899]: time="2025-11-05T16:02:22.367339176Z" level=info msg="Container 4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:22.368895 containerd[1899]: time="2025-11-05T16:02:22.368862322Z" level=info msg="Container 928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:22.410576 containerd[1899]: time="2025-11-05T16:02:22.410513247Z" level=info msg="CreateContainer within sandbox \"a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\"" Nov 5 16:02:22.411943 containerd[1899]: time="2025-11-05T16:02:22.411555446Z" level=info msg="CreateContainer within sandbox \"a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\"" Nov 5 16:02:22.412687 containerd[1899]: time="2025-11-05T16:02:22.412651820Z" level=info msg="StartContainer for \"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\"" Nov 5 16:02:22.414583 containerd[1899]: time="2025-11-05T16:02:22.414494674Z" level=info msg="connecting to shim 4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03" address="unix:///run/containerd/s/1fcd1baf9256f150e1c1d0175bbd0437f6b700096669b5faec4c06d2f856682e" protocol=ttrpc version=3 Nov 5 16:02:22.415061 containerd[1899]: time="2025-11-05T16:02:22.415035529Z" level=info msg="StartContainer for \"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\"" Nov 5 16:02:22.417009 containerd[1899]: time="2025-11-05T16:02:22.416951408Z" level=info msg="CreateContainer within sandbox \"7c9c2bad61f436d3312d01deaf646ba49bd8f5e5a40c39527ef00f5a800ccab3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19\"" Nov 5 16:02:22.417990 containerd[1899]: time="2025-11-05T16:02:22.417940768Z" level=info msg="connecting to shim e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8" address="unix:///run/containerd/s/fe5f53508c91b257e7c79894d040264f24f545524d85eef1f311137fb788e19f" protocol=ttrpc version=3 Nov 5 16:02:22.419015 containerd[1899]: time="2025-11-05T16:02:22.418689569Z" level=info msg="StartContainer for \"928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19\"" Nov 5 16:02:22.419812 containerd[1899]: 
time="2025-11-05T16:02:22.419789758Z" level=info msg="connecting to shim 928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19" address="unix:///run/containerd/s/5d1d6f80b1a94ac2051ebc500511e6c24751d79ff2d675ab60ca8bbc60118534" protocol=ttrpc version=3 Nov 5 16:02:22.455266 systemd[1]: Started cri-containerd-4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03.scope - libcontainer container 4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03. Nov 5 16:02:22.456854 systemd[1]: Started cri-containerd-e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8.scope - libcontainer container e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8. Nov 5 16:02:22.463047 systemd[1]: Started cri-containerd-928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19.scope - libcontainer container 928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19. Nov 5 16:02:22.509897 kubelet[2811]: E1105 16:02:22.509857 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 16:02:22.550736 kubelet[2811]: I1105 16:02:22.550707 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:22.552182 kubelet[2811]: E1105 16:02:22.552044 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Nov 5 16:02:22.572717 containerd[1899]: time="2025-11-05T16:02:22.571452096Z" level=info msg="StartContainer for \"928ba8274c4e395de990c716277c06de108aaee31eb0684c072546768865ff19\" returns successfully" Nov 5 16:02:22.600629 containerd[1899]: time="2025-11-05T16:02:22.599507386Z" level=info msg="StartContainer for \"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\" returns successfully" Nov 5 16:02:22.613180 containerd[1899]: time="2025-11-05T16:02:22.613135860Z" level=info msg="StartContainer for \"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\" returns successfully" Nov 5 16:02:22.917342 kubelet[2811]: E1105 16:02:22.917287 2811 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 16:02:23.010586 kubelet[2811]: E1105 16:02:23.010512 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:23.015700 kubelet[2811]: E1105 16:02:23.015342 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:23.020550 kubelet[2811]: E1105 16:02:23.020525 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:23.944994 kubelet[2811]: E1105 16:02:23.942884 2811 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="3.2s" Nov 5 16:02:24.023421 kubelet[2811]: E1105 16:02:24.023393 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:24.025030 kubelet[2811]: E1105 16:02:24.024582 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:24.155021 kubelet[2811]: I1105 16:02:24.154965 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:24.155371 kubelet[2811]: E1105 16:02:24.155341 2811 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Nov 5 16:02:24.195105 kubelet[2811]: E1105 16:02:24.194971 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 16:02:24.304105 kubelet[2811]: E1105 16:02:24.304062 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-11&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 16:02:24.745803 kubelet[2811]: E1105 16:02:24.745758 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 16:02:25.468747 kubelet[2811]: E1105 16:02:25.468106 2811 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.11:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-11.187527c16d8d3acd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-11,UID:ip-172-31-16-11,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-11,},FirstTimestamp:2025-11-05 16:02:20.905560781 +0000 UTC m=+0.461777563,LastTimestamp:2025-11-05 16:02:20.905560781 +0000 UTC m=+0.461777563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-11,}" Nov 5 16:02:25.596818 kubelet[2811]: E1105 16:02:25.596752 2811 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 16:02:25.936307 kubelet[2811]: E1105 16:02:25.936254 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:27.358324 kubelet[2811]: I1105 16:02:27.358289 2811 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:28.296184 kubelet[2811]: E1105 16:02:28.295745 2811 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:29.039046 kubelet[2811]: E1105 16:02:29.038988 2811 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Nov 5 16:02:29.248500 kubelet[2811]: I1105 16:02:29.248316 2811 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-11" Nov 5 16:02:29.248500 kubelet[2811]: E1105 16:02:29.248351 2811 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-11\": node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.266126 kubelet[2811]: E1105 16:02:29.266095 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.367083 kubelet[2811]: E1105 16:02:29.366648 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.467061 kubelet[2811]: E1105 16:02:29.467018 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.567995 kubelet[2811]: E1105 16:02:29.567943 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.669067 kubelet[2811]: E1105 16:02:29.668970 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.769789 kubelet[2811]: E1105 16:02:29.769744 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.870700 kubelet[2811]: E1105 16:02:29.870628 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:29.971821 kubelet[2811]: E1105 16:02:29.971433 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:30.072107 kubelet[2811]: E1105 16:02:30.072070 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:30.172309 kubelet[2811]: E1105 16:02:30.172228 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:30.273358 kubelet[2811]: E1105 16:02:30.272971 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:30.373362 kubelet[2811]: E1105 16:02:30.373282 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found" Nov 5 16:02:30.474155 kubelet[2811]: E1105 16:02:30.474100 2811 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not 
found" Nov 5 16:02:30.633241 kubelet[2811]: I1105 16:02:30.633197 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-11" Nov 5 16:02:30.644740 kubelet[2811]: I1105 16:02:30.644693 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:30.650192 kubelet[2811]: I1105 16:02:30.650147 2811 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:30.893803 kubelet[2811]: I1105 16:02:30.893341 2811 apiserver.go:52] "Watching apiserver" Nov 5 16:02:30.936359 kubelet[2811]: I1105 16:02:30.936324 2811 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 16:02:30.945119 systemd[1]: Reload requested from client PID 3092 ('systemctl') (unit session-7.scope)... Nov 5 16:02:30.945139 systemd[1]: Reloading... Nov 5 16:02:31.024439 kubelet[2811]: I1105 16:02:31.023934 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-11" podStartSLOduration=1.023911974 podStartE2EDuration="1.023911974s" podCreationTimestamp="2025-11-05 16:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:31.009316427 +0000 UTC m=+10.565533190" watchObservedRunningTime="2025-11-05 16:02:31.023911974 +0000 UTC m=+10.580128741" Nov 5 16:02:31.024439 kubelet[2811]: I1105 16:02:31.024126 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-11" podStartSLOduration=1.024114424 podStartE2EDuration="1.024114424s" podCreationTimestamp="2025-11-05 16:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:31.024111826 +0000 UTC m=+10.580328593" watchObservedRunningTime="2025-11-05 16:02:31.024114424 +0000 UTC m=+10.580331192" Nov 5 16:02:31.046085 kubelet[2811]: I1105 16:02:31.045931 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-11" podStartSLOduration=1.04589534 podStartE2EDuration="1.04589534s" podCreationTimestamp="2025-11-05 16:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:31.045658062 +0000 UTC m=+10.601874828" watchObservedRunningTime="2025-11-05 16:02:31.04589534 +0000 UTC m=+10.602112108" Nov 5 16:02:31.138013 zram_generator::config[3138]: No configuration found. Nov 5 16:02:31.419062 systemd[1]: Reloading finished in 473 ms. Nov 5 16:02:31.452829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:31.462556 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 16:02:31.462779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 16:02:31.462830 systemd[1]: kubelet.service: Consumed 925ms CPU time, 122.2M memory peak. Nov 5 16:02:31.467307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 16:02:31.790910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 16:02:31.805921 (kubelet)[3198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 16:02:31.891085 kubelet[3198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 16:02:31.891085 kubelet[3198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 16:02:31.893689 kubelet[3198]: I1105 16:02:31.893590 3198 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 16:02:31.916769 kubelet[3198]: I1105 16:02:31.916647 3198 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 16:02:31.916769 kubelet[3198]: I1105 16:02:31.916676 3198 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 16:02:31.921391 kubelet[3198]: I1105 16:02:31.921337 3198 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 16:02:31.921391 kubelet[3198]: I1105 16:02:31.921391 3198 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 16:02:31.921808 kubelet[3198]: I1105 16:02:31.921662 3198 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 16:02:31.931885 kubelet[3198]: I1105 16:02:31.930221 3198 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 16:02:31.936933 kubelet[3198]: I1105 16:02:31.936893 3198 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:02:31.941537 kubelet[3198]: I1105 16:02:31.941472 3198 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:02:31.946704 kubelet[3198]: I1105 16:02:31.946669 3198 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 16:02:31.949329 kubelet[3198]: I1105 16:02:31.949074 3198 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:02:31.949662 kubelet[3198]: I1105 16:02:31.949486 3198 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:02:31.949795 kubelet[3198]: I1105 16:02:31.949784 3198 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 16:02:31.949841 kubelet[3198]: I1105 16:02:31.949836 3198 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 16:02:31.949904 kubelet[3198]: I1105 16:02:31.949898 3198 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 16:02:31.950673 kubelet[3198]: I1105 16:02:31.950656 3198 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:31.950917 kubelet[3198]: I1105 16:02:31.950906 3198 kubelet.go:475] "Attempting to sync node with API server" Nov 5 16:02:31.951003 kubelet[3198]: I1105 16:02:31.950991 3198 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:02:31.951096 kubelet[3198]: I1105 16:02:31.951088 3198 kubelet.go:387] "Adding apiserver pod source" Nov 5 16:02:31.951152 kubelet[3198]: I1105 16:02:31.951146 3198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:02:31.957181 kubelet[3198]: I1105 16:02:31.957140 3198 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:02:31.957775 kubelet[3198]: I1105 16:02:31.957758 3198 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 16:02:31.957869 kubelet[3198]: I1105 16:02:31.957791 3198 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 16:02:31.962300 kubelet[3198]: I1105 
16:02:31.962273 3198 server.go:1262] "Started kubelet" Nov 5 16:02:31.978281 kubelet[3198]: I1105 16:02:31.978233 3198 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:02:31.978473 kubelet[3198]: I1105 16:02:31.978458 3198 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 16:02:31.978759 kubelet[3198]: I1105 16:02:31.978747 3198 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:02:31.979896 kubelet[3198]: I1105 16:02:31.979872 3198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:02:31.985527 kubelet[3198]: I1105 16:02:31.985492 3198 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:02:31.986849 kubelet[3198]: I1105 16:02:31.986826 3198 server.go:310] "Adding debug handlers to kubelet server" Nov 5 16:02:31.988401 kubelet[3198]: I1105 16:02:31.988373 3198 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:02:31.991606 kubelet[3198]: I1105 16:02:31.991582 3198 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 16:02:31.991885 kubelet[3198]: I1105 16:02:31.991871 3198 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 16:02:31.992150 kubelet[3198]: I1105 16:02:31.992137 3198 reconciler.go:29] "Reconciler: start to sync state" Nov 5 16:02:31.996006 kubelet[3198]: I1105 16:02:31.995447 3198 factory.go:223] Registration of the systemd container factory successfully Nov 5 16:02:31.996292 kubelet[3198]: I1105 16:02:31.996265 3198 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:02:31.997721 kubelet[3198]: E1105 16:02:31.997697 3198 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 16:02:32.000026 kubelet[3198]: I1105 16:02:31.999969 3198 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 16:02:32.002008 kubelet[3198]: I1105 16:02:32.001395 3198 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 5 16:02:32.002008 kubelet[3198]: I1105 16:02:32.001417 3198 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 16:02:32.002008 kubelet[3198]: I1105 16:02:32.001444 3198 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 16:02:32.002203 kubelet[3198]: E1105 16:02:32.002044 3198 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:02:32.002353 kubelet[3198]: I1105 16:02:32.002338 3198 factory.go:223] Registration of the containerd container factory successfully Nov 5 16:02:32.063926 kubelet[3198]: I1105 16:02:32.063825 3198 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:02:32.064533 kubelet[3198]: I1105 16:02:32.064396 3198 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:02:32.065013 kubelet[3198]: I1105 16:02:32.064953 3198 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:02:32.066077 kubelet[3198]: I1105 16:02:32.065847 3198 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 16:02:32.066367 kubelet[3198]: I1105 16:02:32.066222 3198 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 16:02:32.066367 kubelet[3198]: I1105 16:02:32.066269 3198 policy_none.go:49] "None policy: Start" Nov 5 16:02:32.066708 kubelet[3198]: I1105 16:02:32.066554 3198 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 16:02:32.066708 kubelet[3198]: I1105 16:02:32.066577 3198 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 16:02:32.067315 kubelet[3198]: I1105 16:02:32.067284 3198 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 5 16:02:32.067315 kubelet[3198]: I1105 16:02:32.067302 3198 policy_none.go:47] "Start" Nov 5 16:02:32.075197 kubelet[3198]: E1105 16:02:32.075169 3198 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 16:02:32.075406 kubelet[3198]: I1105 16:02:32.075378 3198 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:02:32.075500 kubelet[3198]: I1105 16:02:32.075398 3198 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:02:32.078012 kubelet[3198]: I1105 16:02:32.077667 3198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:02:32.082630 kubelet[3198]: E1105 16:02:32.082379 3198 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 16:02:32.102728 kubelet[3198]: I1105 16:02:32.102678 3198 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-11" Nov 5 16:02:32.105004 kubelet[3198]: I1105 16:02:32.103132 3198 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:32.105004 kubelet[3198]: I1105 16:02:32.103437 3198 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.112066 kubelet[3198]: E1105 16:02:32.111746 3198 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-11\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-11" Nov 5 16:02:32.113383 kubelet[3198]: E1105 16:02:32.113354 3198 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-11\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:32.113790 kubelet[3198]: E1105 16:02:32.113454 3198 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-11\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.192727 kubelet[3198]: I1105 16:02:32.192608 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/689d5700958538211b9bcc4a393888ab-ca-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"689d5700958538211b9bcc4a393888ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:32.193302 kubelet[3198]: I1105 16:02:32.193282 3198 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Nov 5 16:02:32.203750 kubelet[3198]: I1105 16:02:32.203690 3198 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-11" Nov 5 16:02:32.203889 kubelet[3198]: I1105 16:02:32.203825 3198 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-11" Nov 5 16:02:32.293780 kubelet[3198]: I1105 16:02:32.293471 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/689d5700958538211b9bcc4a393888ab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"689d5700958538211b9bcc4a393888ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:32.293780 kubelet[3198]: I1105 16:02:32.293525 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.293780 kubelet[3198]: I1105 16:02:32.293544 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.293780 kubelet[3198]: I1105 16:02:32.293558 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b4373c445a1244277e832f5b5247a200-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-11\" (UID: \"b4373c445a1244277e832f5b5247a200\") " pod="kube-system/kube-scheduler-ip-172-31-16-11" Nov 5 16:02:32.293780 kubelet[3198]: I1105 16:02:32.293620 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.294074 kubelet[3198]: I1105 16:02:32.293640 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.294074 kubelet[3198]: I1105 16:02:32.293655 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82df93299d77164f6f9ad1186fd2abb4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"82df93299d77164f6f9ad1186fd2abb4\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Nov 5 16:02:32.294074 kubelet[3198]: I1105 16:02:32.293681 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/689d5700958538211b9bcc4a393888ab-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"689d5700958538211b9bcc4a393888ab\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Nov 5 16:02:32.715127 update_engine[1861]: I20251105 16:02:32.715029 1861 update_attempter.cc:509] Updating boot flags... Nov 5 16:02:32.960455 kubelet[3198]: I1105 16:02:32.959410 3198 apiserver.go:52] "Watching apiserver" Nov 5 16:02:33.107494 kubelet[3198]: I1105 16:02:33.107360 3198 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 16:02:37.397873 kubelet[3198]: I1105 16:02:37.397835 3198 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 16:02:37.398563 containerd[1899]: time="2025-11-05T16:02:37.398524908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 16:02:37.399028 kubelet[3198]: I1105 16:02:37.399005 3198 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 16:02:37.606121 systemd[1]: Created slice kubepods-besteffort-pod24acf1a3_1915_4e89_8b8f_89bcbd421cb2.slice - libcontainer container kubepods-besteffort-pod24acf1a3_1915_4e89_8b8f_89bcbd421cb2.slice. 
Nov 5 16:02:37.734465 kubelet[3198]: I1105 16:02:37.734278 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2wf\" (UniqueName: \"kubernetes.io/projected/24acf1a3-1915-4e89-8b8f-89bcbd421cb2-kube-api-access-sj2wf\") pod \"kube-proxy-fhxwj\" (UID: \"24acf1a3-1915-4e89-8b8f-89bcbd421cb2\") " pod="kube-system/kube-proxy-fhxwj" Nov 5 16:02:37.734465 kubelet[3198]: I1105 16:02:37.734319 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24acf1a3-1915-4e89-8b8f-89bcbd421cb2-kube-proxy\") pod \"kube-proxy-fhxwj\" (UID: \"24acf1a3-1915-4e89-8b8f-89bcbd421cb2\") " pod="kube-system/kube-proxy-fhxwj" Nov 5 16:02:37.734465 kubelet[3198]: I1105 16:02:37.734343 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24acf1a3-1915-4e89-8b8f-89bcbd421cb2-xtables-lock\") pod \"kube-proxy-fhxwj\" (UID: \"24acf1a3-1915-4e89-8b8f-89bcbd421cb2\") " pod="kube-system/kube-proxy-fhxwj" Nov 5 16:02:37.734465 kubelet[3198]: I1105 16:02:37.734368 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24acf1a3-1915-4e89-8b8f-89bcbd421cb2-lib-modules\") pod \"kube-proxy-fhxwj\" (UID: \"24acf1a3-1915-4e89-8b8f-89bcbd421cb2\") " pod="kube-system/kube-proxy-fhxwj" Nov 5 16:02:37.843212 kubelet[3198]: E1105 16:02:37.843173 3198 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 16:02:37.843212 kubelet[3198]: E1105 16:02:37.843208 3198 projected.go:196] Error preparing data for projected volume kube-api-access-sj2wf for pod kube-system/kube-proxy-fhxwj: configmap "kube-root-ca.crt" not found Nov 5 16:02:37.847860 kubelet[3198]: E1105 16:02:37.847793 3198 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24acf1a3-1915-4e89-8b8f-89bcbd421cb2-kube-api-access-sj2wf podName:24acf1a3-1915-4e89-8b8f-89bcbd421cb2 nodeName:}" failed. No retries permitted until 2025-11-05 16:02:38.343269831 +0000 UTC m=+6.529325618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sj2wf" (UniqueName: "kubernetes.io/projected/24acf1a3-1915-4e89-8b8f-89bcbd421cb2-kube-api-access-sj2wf") pod "kube-proxy-fhxwj" (UID: "24acf1a3-1915-4e89-8b8f-89bcbd421cb2") : configmap "kube-root-ca.crt" not found Nov 5 16:02:38.519952 containerd[1899]: time="2025-11-05T16:02:38.519908296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhxwj,Uid:24acf1a3-1915-4e89-8b8f-89bcbd421cb2,Namespace:kube-system,Attempt:0,}" Nov 5 16:02:38.547542 containerd[1899]: time="2025-11-05T16:02:38.547387504Z" level=info msg="connecting to shim 5712ab31fdd77ceef76bab9721de939399586a711629dfe8c09027e0cd7d0ec5" address="unix:///run/containerd/s/31718c18e311649618f1459b77c70ec300220df665357c93546a4cd63a589451" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:38.612269 systemd[1]: Started cri-containerd-5712ab31fdd77ceef76bab9721de939399586a711629dfe8c09027e0cd7d0ec5.scope - libcontainer container 5712ab31fdd77ceef76bab9721de939399586a711629dfe8c09027e0cd7d0ec5. Nov 5 16:02:38.629804 systemd[1]: Created slice kubepods-besteffort-pod59d40a32_3e99_4527_9cf8_2a3105968b6b.slice - libcontainer container kubepods-besteffort-pod59d40a32_3e99_4527_9cf8_2a3105968b6b.slice. 
Nov 5 16:02:38.696152 containerd[1899]: time="2025-11-05T16:02:38.695945857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fhxwj,Uid:24acf1a3-1915-4e89-8b8f-89bcbd421cb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5712ab31fdd77ceef76bab9721de939399586a711629dfe8c09027e0cd7d0ec5\"" Nov 5 16:02:38.705064 containerd[1899]: time="2025-11-05T16:02:38.704213122Z" level=info msg="CreateContainer within sandbox \"5712ab31fdd77ceef76bab9721de939399586a711629dfe8c09027e0cd7d0ec5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 16:02:38.741499 kubelet[3198]: I1105 16:02:38.741453 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59d40a32-3e99-4527-9cf8-2a3105968b6b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-8s7gb\" (UID: \"59d40a32-3e99-4527-9cf8-2a3105968b6b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8s7gb" Nov 5 16:02:38.742030 kubelet[3198]: I1105 16:02:38.741959 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stlk\" (UniqueName: \"kubernetes.io/projected/59d40a32-3e99-4527-9cf8-2a3105968b6b-kube-api-access-7stlk\") pod \"tigera-operator-65cdcdfd6d-8s7gb\" (UID: \"59d40a32-3e99-4527-9cf8-2a3105968b6b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8s7gb" Nov 5 16:02:38.749207 containerd[1899]: time="2025-11-05T16:02:38.749121397Z" level=info msg="Container 9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:38.750596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158444648.mount: Deactivated successfully. Nov 5 16:02:38.763869 containerd[1899]: time="2025-11-05T16:02:38.763409430Z" level=info msg="CreateContainer within sandbox \"5712ab31fdd77ceef76bab9721de939399586a711629dfe8c09027e0cd7d0ec5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267\"" Nov 5 16:02:38.765146 containerd[1899]: time="2025-11-05T16:02:38.765100061Z" level=info msg="StartContainer for \"9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267\"" Nov 5 16:02:38.767870 containerd[1899]: time="2025-11-05T16:02:38.767821561Z" level=info msg="connecting to shim 9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267" address="unix:///run/containerd/s/31718c18e311649618f1459b77c70ec300220df665357c93546a4cd63a589451" protocol=ttrpc version=3 Nov 5 16:02:38.796392 systemd[1]: Started cri-containerd-9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267.scope - libcontainer container 9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267. 
Nov 5 16:02:38.844669 containerd[1899]: time="2025-11-05T16:02:38.844508298Z" level=info msg="StartContainer for \"9d14fe7b1f7462681f9fb71d97f8ebc38fa393a07c0a8ffb9321d7dee09a2267\" returns successfully" Nov 5 16:02:38.939732 containerd[1899]: time="2025-11-05T16:02:38.939484860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8s7gb,Uid:59d40a32-3e99-4527-9cf8-2a3105968b6b,Namespace:tigera-operator,Attempt:0,}" Nov 5 16:02:38.969307 containerd[1899]: time="2025-11-05T16:02:38.969209210Z" level=info msg="connecting to shim 9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f" address="unix:///run/containerd/s/a9ac3224e2b220bb5bd67ae6013fcebcee03a0013fdcaac7f26c163c5229950d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:02:38.998227 systemd[1]: Started cri-containerd-9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f.scope - libcontainer container 9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f. Nov 5 16:02:39.057136 containerd[1899]: time="2025-11-05T16:02:39.056824511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8s7gb,Uid:59d40a32-3e99-4527-9cf8-2a3105968b6b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f\"" Nov 5 16:02:39.059607 containerd[1899]: time="2025-11-05T16:02:39.059432789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 16:02:39.116835 kubelet[3198]: I1105 16:02:39.116647 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fhxwj" podStartSLOduration=2.116624348 podStartE2EDuration="2.116624348s" podCreationTimestamp="2025-11-05 16:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:02:39.099725412 +0000 UTC m=+7.285781247" watchObservedRunningTime="2025-11-05 16:02:39.116624348 +0000 UTC m=+7.302680142" Nov 5 16:02:40.646326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223796799.mount: Deactivated successfully. 
Nov 5 16:02:41.467286 containerd[1899]: time="2025-11-05T16:02:41.467215955Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:41.476510 containerd[1899]: time="2025-11-05T16:02:41.468743551Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 16:02:41.476672 containerd[1899]: time="2025-11-05T16:02:41.469884315Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:41.476960 containerd[1899]: time="2025-11-05T16:02:41.472839607Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.413350975s" Nov 5 16:02:41.476960 containerd[1899]: time="2025-11-05T16:02:41.476852934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 16:02:41.477516 containerd[1899]: time="2025-11-05T16:02:41.477453543Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:02:41.484017 containerd[1899]: time="2025-11-05T16:02:41.483956995Z" level=info msg="CreateContainer within sandbox \"9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 16:02:41.494328 containerd[1899]: time="2025-11-05T16:02:41.494139557Z" level=info msg="Container 18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:02:41.499136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897093055.mount: Deactivated successfully. Nov 5 16:02:41.522452 containerd[1899]: time="2025-11-05T16:02:41.522378455Z" level=info msg="CreateContainer within sandbox \"9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\"" Nov 5 16:02:41.525831 containerd[1899]: time="2025-11-05T16:02:41.525754378Z" level=info msg="StartContainer for \"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\"" Nov 5 16:02:41.543131 containerd[1899]: time="2025-11-05T16:02:41.543077731Z" level=info msg="connecting to shim 18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c" address="unix:///run/containerd/s/a9ac3224e2b220bb5bd67ae6013fcebcee03a0013fdcaac7f26c163c5229950d" protocol=ttrpc version=3 Nov 5 16:02:41.578440 systemd[1]: Started cri-containerd-18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c.scope - libcontainer container 18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c. 
Nov 5 16:02:41.646628 containerd[1899]: time="2025-11-05T16:02:41.646578216Z" level=info msg="StartContainer for \"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\" returns successfully" Nov 5 16:03:18.998084 sudo[2240]: pam_unix(sudo:session): session closed for user root Nov 5 16:03:19.025329 sshd-session[2236]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:19.026584 sshd[2239]: Connection closed by 139.178.68.195 port 56032 Nov 5 16:03:19.033168 systemd-logind[1855]: Session 7 logged out. Waiting for processes to exit. Nov 5 16:03:19.035125 systemd[1]: sshd@6-172.31.16.11:22-139.178.68.195:56032.service: Deactivated successfully. Nov 5 16:03:19.040547 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 16:03:19.040868 systemd[1]: session-7.scope: Consumed 5.759s CPU time, 155.2M memory peak. Nov 5 16:03:19.045457 systemd-logind[1855]: Removed session 7. Nov 5 16:03:25.408556 kubelet[3198]: I1105 16:03:25.408382 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-8s7gb" podStartSLOduration=44.988510518 podStartE2EDuration="47.40835277s" podCreationTimestamp="2025-11-05 16:02:38 +0000 UTC" firstStartedPulling="2025-11-05 16:02:39.058630223 +0000 UTC m=+7.244686014" lastFinishedPulling="2025-11-05 16:02:41.478472494 +0000 UTC m=+9.664528266" observedRunningTime="2025-11-05 16:02:42.124302573 +0000 UTC m=+10.310358380" watchObservedRunningTime="2025-11-05 16:03:25.40835277 +0000 UTC m=+53.594408576" Nov 5 16:03:25.430398 systemd[1]: Created slice kubepods-besteffort-pod4b057cab_30a3_4c20_a729_f05758aecd4e.slice - libcontainer container kubepods-besteffort-pod4b057cab_30a3_4c20_a729_f05758aecd4e.slice. Nov 5 16:03:25.491859 kubelet[3198]: I1105 16:03:25.491819 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4b057cab-30a3-4c20-a729-f05758aecd4e-typha-certs\") pod \"calico-typha-594b5f5654-6zdtk\" (UID: \"4b057cab-30a3-4c20-a729-f05758aecd4e\") " pod="calico-system/calico-typha-594b5f5654-6zdtk" Nov 5 16:03:25.491859 kubelet[3198]: I1105 16:03:25.491883 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b057cab-30a3-4c20-a729-f05758aecd4e-tigera-ca-bundle\") pod \"calico-typha-594b5f5654-6zdtk\" (UID: \"4b057cab-30a3-4c20-a729-f05758aecd4e\") " pod="calico-system/calico-typha-594b5f5654-6zdtk" Nov 5 16:03:25.492127 kubelet[3198]: I1105 16:03:25.491918 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45z8t\" (UniqueName: \"kubernetes.io/projected/4b057cab-30a3-4c20-a729-f05758aecd4e-kube-api-access-45z8t\") pod \"calico-typha-594b5f5654-6zdtk\" (UID: \"4b057cab-30a3-4c20-a729-f05758aecd4e\") " pod="calico-system/calico-typha-594b5f5654-6zdtk" Nov 5 16:03:25.740520 containerd[1899]: time="2025-11-05T16:03:25.740408277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-594b5f5654-6zdtk,Uid:4b057cab-30a3-4c20-a729-f05758aecd4e,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:25.745271 systemd[1]: Created slice kubepods-besteffort-pod42a8ebc1_0b91_4078_a48f_6580d418deb9.slice - libcontainer container kubepods-besteffort-pod42a8ebc1_0b91_4078_a48f_6580d418deb9.slice. 
Nov 5 16:03:25.791277 containerd[1899]: time="2025-11-05T16:03:25.791224853Z" level=info msg="connecting to shim 54b8ee21be79c41d34db038011910c2338d006dffcf8e0946522ac7dc9eecf11" address="unix:///run/containerd/s/2e2b6a91298983e70adfb41957fed30c0e1e1c8743e1ee65a9ad12e8204c12ab" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:25.793646 kubelet[3198]: I1105 16:03:25.793607 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-xtables-lock\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795433 kubelet[3198]: I1105 16:03:25.794086 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42a8ebc1-0b91-4078-a48f-6580d418deb9-tigera-ca-bundle\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795433 kubelet[3198]: I1105 16:03:25.795019 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-cni-log-dir\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795433 kubelet[3198]: I1105 16:03:25.795061 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-cni-net-dir\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795433 kubelet[3198]: I1105 16:03:25.795092 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht58d\" (UniqueName: \"kubernetes.io/projected/42a8ebc1-0b91-4078-a48f-6580d418deb9-kube-api-access-ht58d\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795433 kubelet[3198]: I1105 16:03:25.795123 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-flexvol-driver-host\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795691 kubelet[3198]: I1105 16:03:25.795147 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-var-lib-calico\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795691 kubelet[3198]: I1105 16:03:25.795172 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42a8ebc1-0b91-4078-a48f-6580d418deb9-node-certs\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795691 kubelet[3198]: I1105 16:03:25.795195 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-policysync\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795691 kubelet[3198]: I1105 16:03:25.795215 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-var-run-calico\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795691 kubelet[3198]: I1105 16:03:25.795240 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-cni-bin-dir\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.795842 kubelet[3198]: I1105 16:03:25.795283 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42a8ebc1-0b91-4078-a48f-6580d418deb9-lib-modules\") pod \"calico-node-kn7b4\" (UID: \"42a8ebc1-0b91-4078-a48f-6580d418deb9\") " pod="calico-system/calico-node-kn7b4" Nov 5 16:03:25.842209 systemd[1]: Started cri-containerd-54b8ee21be79c41d34db038011910c2338d006dffcf8e0946522ac7dc9eecf11.scope - libcontainer container 54b8ee21be79c41d34db038011910c2338d006dffcf8e0946522ac7dc9eecf11. Nov 5 16:03:25.905136 kubelet[3198]: E1105 16:03:25.905072 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:25.907097 kubelet[3198]: W1105 16:03:25.905115 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:25.907764 kubelet[3198]: E1105 16:03:25.907071 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:25.912429 kubelet[3198]: E1105 16:03:25.912403 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:25.912624 kubelet[3198]: W1105 16:03:25.912604 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:25.912821 kubelet[3198]: E1105 16:03:25.912801 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:25.937776 kubelet[3198]: E1105 16:03:25.937752 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:25.938361 kubelet[3198]: W1105 16:03:25.937897 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:25.938361 kubelet[3198]: E1105 16:03:25.937924 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:25.958022 kubelet[3198]: E1105 16:03:25.957651 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:25.973000 containerd[1899]: time="2025-11-05T16:03:25.971632958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-594b5f5654-6zdtk,Uid:4b057cab-30a3-4c20-a729-f05758aecd4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"54b8ee21be79c41d34db038011910c2338d006dffcf8e0946522ac7dc9eecf11\"" Nov 5 16:03:25.976347 containerd[1899]: time="2025-11-05T16:03:25.976300602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 16:03:26.056857 kubelet[3198]: E1105 16:03:26.056587 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.056857 kubelet[3198]: W1105 16:03:26.056614 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.056857 kubelet[3198]: E1105 16:03:26.056636 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.057344 kubelet[3198]: E1105 16:03:26.057111 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.057344 kubelet[3198]: W1105 16:03:26.057212 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.057344 kubelet[3198]: E1105 16:03:26.057227 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.057594 kubelet[3198]: E1105 16:03:26.057584 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.057665 kubelet[3198]: W1105 16:03:26.057656 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.057720 kubelet[3198]: E1105 16:03:26.057710 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.057990 kubelet[3198]: E1105 16:03:26.057951 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.057990 kubelet[3198]: W1105 16:03:26.057961 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.058161 kubelet[3198]: E1105 16:03:26.057970 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.058309 kubelet[3198]: E1105 16:03:26.058301 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.058429 kubelet[3198]: W1105 16:03:26.058354 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.058429 kubelet[3198]: E1105 16:03:26.058365 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.058662 kubelet[3198]: E1105 16:03:26.058584 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.058662 kubelet[3198]: W1105 16:03:26.058593 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.058662 kubelet[3198]: E1105 16:03:26.058600 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.058791 kubelet[3198]: E1105 16:03:26.058784 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.059134 kubelet[3198]: W1105 16:03:26.058831 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.059134 kubelet[3198]: E1105 16:03:26.058841 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.059764 kubelet[3198]: E1105 16:03:26.059515 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.059764 kubelet[3198]: W1105 16:03:26.059527 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.059764 kubelet[3198]: E1105 16:03:26.059542 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.060312 kubelet[3198]: E1105 16:03:26.060241 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.060312 kubelet[3198]: W1105 16:03:26.060263 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.060312 kubelet[3198]: E1105 16:03:26.060278 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.060510 kubelet[3198]: E1105 16:03:26.060445 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.060510 kubelet[3198]: W1105 16:03:26.060461 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.060510 kubelet[3198]: E1105 16:03:26.060470 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.060880 kubelet[3198]: E1105 16:03:26.060843 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.060880 kubelet[3198]: W1105 16:03:26.060860 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.060880 kubelet[3198]: E1105 16:03:26.060870 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.061300 kubelet[3198]: E1105 16:03:26.061222 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.061300 kubelet[3198]: W1105 16:03:26.061231 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.061300 kubelet[3198]: E1105 16:03:26.061241 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.061950 kubelet[3198]: E1105 16:03:26.061707 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.061950 kubelet[3198]: W1105 16:03:26.061719 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.061950 kubelet[3198]: E1105 16:03:26.061729 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.062241 kubelet[3198]: E1105 16:03:26.062183 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.062410 containerd[1899]: time="2025-11-05T16:03:26.062386358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kn7b4,Uid:42a8ebc1-0b91-4078-a48f-6580d418deb9,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:26.063040 kubelet[3198]: W1105 16:03:26.063017 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.063102 kubelet[3198]: E1105 16:03:26.063044 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.063664 kubelet[3198]: E1105 16:03:26.063610 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.063664 kubelet[3198]: W1105 16:03:26.063646 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.063664 kubelet[3198]: E1105 16:03:26.063658 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.063891 kubelet[3198]: E1105 16:03:26.063873 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.063891 kubelet[3198]: W1105 16:03:26.063883 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.063891 kubelet[3198]: E1105 16:03:26.063892 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.064455 kubelet[3198]: E1105 16:03:26.064440 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.064455 kubelet[3198]: W1105 16:03:26.064455 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.064525 kubelet[3198]: E1105 16:03:26.064471 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.064722 kubelet[3198]: E1105 16:03:26.064701 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.064722 kubelet[3198]: W1105 16:03:26.064714 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.064722 kubelet[3198]: E1105 16:03:26.064723 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.065342 kubelet[3198]: E1105 16:03:26.065234 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.065342 kubelet[3198]: W1105 16:03:26.065247 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.065342 kubelet[3198]: E1105 16:03:26.065258 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.065846 kubelet[3198]: E1105 16:03:26.065452 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.065846 kubelet[3198]: W1105 16:03:26.065460 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.065846 kubelet[3198]: E1105 16:03:26.065469 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.095170 containerd[1899]: time="2025-11-05T16:03:26.094886712Z" level=info msg="connecting to shim 5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb" address="unix:///run/containerd/s/ed4a6ce81a868c833e7536a5c2abb9dc50dbe758e6c11fd0fce108264870aaba" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:26.097493 kubelet[3198]: E1105 16:03:26.097438 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.097970 kubelet[3198]: W1105 16:03:26.097546 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.097970 kubelet[3198]: E1105 16:03:26.097565 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.098454 kubelet[3198]: I1105 16:03:26.098167 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0a5c89c-b602-442e-811b-c3720b9add41-socket-dir\") pod \"csi-node-driver-7k4x5\" (UID: \"d0a5c89c-b602-442e-811b-c3720b9add41\") " pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:26.099182 kubelet[3198]: E1105 16:03:26.098810 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.099182 kubelet[3198]: W1105 16:03:26.098825 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.099182 kubelet[3198]: E1105 16:03:26.098844 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.099182 kubelet[3198]: I1105 16:03:26.098867 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkz6d\" (UniqueName: \"kubernetes.io/projected/d0a5c89c-b602-442e-811b-c3720b9add41-kube-api-access-lkz6d\") pod \"csi-node-driver-7k4x5\" (UID: \"d0a5c89c-b602-442e-811b-c3720b9add41\") " pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:26.099606 kubelet[3198]: E1105 16:03:26.099585 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.099824 kubelet[3198]: W1105 16:03:26.099668 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.099824 kubelet[3198]: E1105 16:03:26.099685 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.099824 kubelet[3198]: I1105 16:03:26.099707 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0a5c89c-b602-442e-811b-c3720b9add41-kubelet-dir\") pod \"csi-node-driver-7k4x5\" (UID: \"d0a5c89c-b602-442e-811b-c3720b9add41\") " pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:26.100442 kubelet[3198]: E1105 16:03:26.100248 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.100442 kubelet[3198]: W1105 16:03:26.100437 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.100522 kubelet[3198]: E1105 16:03:26.100450 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.100746 kubelet[3198]: E1105 16:03:26.100732 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.100872 kubelet[3198]: W1105 16:03:26.100745 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.100872 kubelet[3198]: E1105 16:03:26.100849 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.101208 kubelet[3198]: E1105 16:03:26.101154 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.101303 kubelet[3198]: W1105 16:03:26.101165 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.101341 kubelet[3198]: E1105 16:03:26.101307 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.102074 kubelet[3198]: E1105 16:03:26.101994 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.102074 kubelet[3198]: W1105 16:03:26.102005 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.102074 kubelet[3198]: E1105 16:03:26.102015 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.102372 kubelet[3198]: I1105 16:03:26.102171 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0a5c89c-b602-442e-811b-c3720b9add41-registration-dir\") pod \"csi-node-driver-7k4x5\" (UID: \"d0a5c89c-b602-442e-811b-c3720b9add41\") " pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:26.102932 kubelet[3198]: E1105 16:03:26.102916 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.102932 kubelet[3198]: W1105 16:03:26.102931 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.103133 kubelet[3198]: E1105 16:03:26.102941 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.104580 kubelet[3198]: E1105 16:03:26.104508 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.104580 kubelet[3198]: W1105 16:03:26.104522 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.104580 kubelet[3198]: E1105 16:03:26.104533 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.105545 kubelet[3198]: E1105 16:03:26.105516 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.105545 kubelet[3198]: W1105 16:03:26.105532 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.105545 kubelet[3198]: E1105 16:03:26.105542 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.106722 kubelet[3198]: E1105 16:03:26.106670 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.106722 kubelet[3198]: W1105 16:03:26.106682 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.106722 kubelet[3198]: E1105 16:03:26.106698 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.107144 kubelet[3198]: E1105 16:03:26.107117 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.107144 kubelet[3198]: W1105 16:03:26.107132 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.107144 kubelet[3198]: E1105 16:03:26.107142 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.108034 kubelet[3198]: I1105 16:03:26.107565 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d0a5c89c-b602-442e-811b-c3720b9add41-varrun\") pod \"csi-node-driver-7k4x5\" (UID: \"d0a5c89c-b602-442e-811b-c3720b9add41\") " pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:26.108408 kubelet[3198]: E1105 16:03:26.108223 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.109261 kubelet[3198]: W1105 16:03:26.109222 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.109261 kubelet[3198]: E1105 16:03:26.109257 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.109680 kubelet[3198]: E1105 16:03:26.109662 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.110042 kubelet[3198]: W1105 16:03:26.110024 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.110089 kubelet[3198]: E1105 16:03:26.110045 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.110252 kubelet[3198]: E1105 16:03:26.110238 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.110252 kubelet[3198]: W1105 16:03:26.110251 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.110368 kubelet[3198]: E1105 16:03:26.110260 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.128725 systemd[1]: Started cri-containerd-5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb.scope - libcontainer container 5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb. 
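The block of kubelet errors above is a single failure repeating: the dynamic plugin prober keeps invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not found in $PATH, the call produces empty output, and unmarshalling "" as JSON fails with "unexpected end of JSON input". For context, a FlexVolume driver is expected to print a JSON status object to stdout for each command. The sketch below is a hypothetical stand-in (it is not the Calico nodeagent~uds driver) showing the minimal init reply that kubelet's driver-call path can parse; the noise presumably stops once the calico-node pod's flexvol-driver init container (image ghcr.io/flatcar/calico/pod2daemon-flexvol, pulled later in this log) installs the real uds binary into that directory.

```go
// Minimal sketch of a FlexVolume executable (hypothetical stand-in, not the
// Calico uds driver): kubelet shells out to "<driver> init" and unmarshals
// whatever appears on stdout as JSON, so an empty reply produces the
// "unexpected end of JSON input" errors seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any other command: report "Not supported" so kubelet can fall back.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```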
Nov 5 16:03:26.175296 containerd[1899]: time="2025-11-05T16:03:26.174817009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kn7b4,Uid:42a8ebc1-0b91-4078-a48f-6580d418deb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\"" Nov 5 16:03:26.213136 kubelet[3198]: E1105 16:03:26.212711 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.213136 kubelet[3198]: W1105 16:03:26.212745 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.213136 kubelet[3198]: E1105 16:03:26.212773 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.214087 kubelet[3198]: E1105 16:03:26.213572 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.214087 kubelet[3198]: W1105 16:03:26.213590 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.214087 kubelet[3198]: E1105 16:03:26.213609 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.216860 kubelet[3198]: E1105 16:03:26.216833 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.216860 kubelet[3198]: W1105 16:03:26.216859 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.217097 kubelet[3198]: E1105 16:03:26.216883 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.217557 kubelet[3198]: E1105 16:03:26.217532 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.217660 kubelet[3198]: W1105 16:03:26.217650 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.217704 kubelet[3198]: E1105 16:03:26.217670 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.218240 kubelet[3198]: E1105 16:03:26.218206 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.218454 kubelet[3198]: W1105 16:03:26.218222 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.218454 kubelet[3198]: E1105 16:03:26.218344 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.221194 kubelet[3198]: E1105 16:03:26.221125 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.221194 kubelet[3198]: W1105 16:03:26.221148 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.221194 kubelet[3198]: E1105 16:03:26.221192 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.223754 kubelet[3198]: E1105 16:03:26.223719 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.223754 kubelet[3198]: W1105 16:03:26.223748 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.223940 kubelet[3198]: E1105 16:03:26.223771 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.224356 kubelet[3198]: E1105 16:03:26.224231 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.224356 kubelet[3198]: W1105 16:03:26.224246 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.224356 kubelet[3198]: E1105 16:03:26.224264 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.224642 kubelet[3198]: E1105 16:03:26.224600 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.224642 kubelet[3198]: W1105 16:03:26.224623 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.224642 kubelet[3198]: E1105 16:03:26.224641 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.224947 kubelet[3198]: E1105 16:03:26.224919 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.224947 kubelet[3198]: W1105 16:03:26.224935 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.225665 kubelet[3198]: E1105 16:03:26.224948 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.225665 kubelet[3198]: E1105 16:03:26.225264 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.225665 kubelet[3198]: W1105 16:03:26.225284 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.225665 kubelet[3198]: E1105 16:03:26.225297 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.225665 kubelet[3198]: E1105 16:03:26.225593 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.225665 kubelet[3198]: W1105 16:03:26.225604 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.225665 kubelet[3198]: E1105 16:03:26.225616 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.227516 kubelet[3198]: E1105 16:03:26.227492 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.227516 kubelet[3198]: W1105 16:03:26.227515 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.227653 kubelet[3198]: E1105 16:03:26.227540 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.227933 kubelet[3198]: E1105 16:03:26.227860 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.227933 kubelet[3198]: W1105 16:03:26.227875 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.227933 kubelet[3198]: E1105 16:03:26.227890 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.228191 kubelet[3198]: E1105 16:03:26.228163 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.228191 kubelet[3198]: W1105 16:03:26.228180 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.228297 kubelet[3198]: E1105 16:03:26.228193 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.229930 kubelet[3198]: E1105 16:03:26.229901 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.230181 kubelet[3198]: W1105 16:03:26.229941 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.230181 kubelet[3198]: E1105 16:03:26.229963 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.231765 kubelet[3198]: E1105 16:03:26.231675 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.231765 kubelet[3198]: W1105 16:03:26.231694 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.231765 kubelet[3198]: E1105 16:03:26.231716 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.232877 kubelet[3198]: E1105 16:03:26.232677 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.232877 kubelet[3198]: W1105 16:03:26.232696 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.232877 kubelet[3198]: E1105 16:03:26.232712 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.233824 kubelet[3198]: E1105 16:03:26.233654 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.233824 kubelet[3198]: W1105 16:03:26.233668 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.233824 kubelet[3198]: E1105 16:03:26.233683 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.235053 kubelet[3198]: E1105 16:03:26.234865 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.235053 kubelet[3198]: W1105 16:03:26.234882 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.235053 kubelet[3198]: E1105 16:03:26.234897 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.236014 kubelet[3198]: E1105 16:03:26.235998 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.236741 kubelet[3198]: W1105 16:03:26.236106 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.236741 kubelet[3198]: E1105 16:03:26.236126 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.236936 kubelet[3198]: E1105 16:03:26.236922 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.237164 kubelet[3198]: W1105 16:03:26.237081 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.237382 kubelet[3198]: E1105 16:03:26.237358 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.238319 kubelet[3198]: E1105 16:03:26.238249 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.238589 kubelet[3198]: W1105 16:03:26.238550 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.238589 kubelet[3198]: E1105 16:03:26.238573 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.239143 kubelet[3198]: E1105 16:03:26.239129 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.239411 kubelet[3198]: W1105 16:03:26.239210 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.239411 kubelet[3198]: E1105 16:03:26.239224 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:26.240096 kubelet[3198]: E1105 16:03:26.240053 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.240096 kubelet[3198]: W1105 16:03:26.240068 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.240415 kubelet[3198]: E1105 16:03:26.240184 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:26.256199 kubelet[3198]: E1105 16:03:26.256063 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:26.256199 kubelet[3198]: W1105 16:03:26.256093 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:26.256199 kubelet[3198]: E1105 16:03:26.256121 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:27.419317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692637163.mount: Deactivated successfully. Nov 5 16:03:28.002581 kubelet[3198]: E1105 16:03:28.002433 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:28.546024 containerd[1899]: time="2025-11-05T16:03:28.545959293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:28.547019 containerd[1899]: time="2025-11-05T16:03:28.546816445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 16:03:28.548096 containerd[1899]: time="2025-11-05T16:03:28.548063248Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:28.550366 containerd[1899]: time="2025-11-05T16:03:28.550328391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:28.551075 containerd[1899]: time="2025-11-05T16:03:28.551045395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.574682768s" Nov 5 16:03:28.551212 containerd[1899]: time="2025-11-05T16:03:28.551191936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 16:03:28.553148 
containerd[1899]: time="2025-11-05T16:03:28.553007703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 16:03:28.579649 containerd[1899]: time="2025-11-05T16:03:28.579291238Z" level=info msg="CreateContainer within sandbox \"54b8ee21be79c41d34db038011910c2338d006dffcf8e0946522ac7dc9eecf11\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 16:03:28.591017 containerd[1899]: time="2025-11-05T16:03:28.587585834Z" level=info msg="Container 0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:28.602600 containerd[1899]: time="2025-11-05T16:03:28.602548071Z" level=info msg="CreateContainer within sandbox \"54b8ee21be79c41d34db038011910c2338d006dffcf8e0946522ac7dc9eecf11\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f\"" Nov 5 16:03:28.603867 containerd[1899]: time="2025-11-05T16:03:28.603535854Z" level=info msg="StartContainer for \"0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f\"" Nov 5 16:03:28.605690 containerd[1899]: time="2025-11-05T16:03:28.605652912Z" level=info msg="connecting to shim 0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f" address="unix:///run/containerd/s/2e2b6a91298983e70adfb41957fed30c0e1e1c8743e1ee65a9ad12e8204c12ab" protocol=ttrpc version=3 Nov 5 16:03:28.667261 systemd[1]: Started cri-containerd-0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f.scope - libcontainer container 0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f. Nov 5 16:03:28.727535 containerd[1899]: time="2025-11-05T16:03:28.727489585Z" level=info msg="StartContainer for \"0f0221df7d5e1d5b6fa9afefec8259bbeaa2930cfca6b39375685f9e93a9fa9f\" returns successfully" Nov 5 16:03:29.341564 kubelet[3198]: I1105 16:03:29.341229 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-594b5f5654-6zdtk" podStartSLOduration=1.764201576 podStartE2EDuration="4.341209093s" podCreationTimestamp="2025-11-05 16:03:25 +0000 UTC" firstStartedPulling="2025-11-05 16:03:25.97541931 +0000 UTC m=+54.161475090" lastFinishedPulling="2025-11-05 16:03:28.552426817 +0000 UTC m=+56.738482607" observedRunningTime="2025-11-05 16:03:29.325334808 +0000 UTC m=+57.511390611" watchObservedRunningTime="2025-11-05 16:03:29.341209093 +0000 UTC m=+57.527264884" Nov 5 16:03:29.389754 kubelet[3198]: E1105 16:03:29.389704 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.389754 kubelet[3198]: W1105 16:03:29.389735 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.389754 kubelet[3198]: E1105 16:03:29.389757 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.389955 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.414666 kubelet[3198]: W1105 16:03:29.389962 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.389972 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.390240 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.414666 kubelet[3198]: W1105 16:03:29.390248 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.390257 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.390623 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.414666 kubelet[3198]: W1105 16:03:29.390638 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.390653 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.414666 kubelet[3198]: E1105 16:03:29.390911 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415007 kubelet[3198]: W1105 16:03:29.390930 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415007 kubelet[3198]: E1105 16:03:29.390946 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.415007 kubelet[3198]: E1105 16:03:29.391210 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415007 kubelet[3198]: W1105 16:03:29.391219 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415007 kubelet[3198]: E1105 16:03:29.391230 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.415007 kubelet[3198]: E1105 16:03:29.391619 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415007 kubelet[3198]: W1105 16:03:29.391629 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415007 kubelet[3198]: E1105 16:03:29.391641 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.415007 kubelet[3198]: E1105 16:03:29.391855 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415007 kubelet[3198]: W1105 16:03:29.391862 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.391871 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.392156 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415288 kubelet[3198]: W1105 16:03:29.392166 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.392175 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.392364 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415288 kubelet[3198]: W1105 16:03:29.392374 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.392383 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.392568 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.415288 kubelet[3198]: W1105 16:03:29.392576 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.415288 kubelet[3198]: E1105 16:03:29.392584 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.392775 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.416272 kubelet[3198]: W1105 16:03:29.392782 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.392789 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.393019 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.416272 kubelet[3198]: W1105 16:03:29.393029 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.393040 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.393218 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.416272 kubelet[3198]: W1105 16:03:29.393226 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.393234 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.416272 kubelet[3198]: E1105 16:03:29.393416 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.416526 kubelet[3198]: W1105 16:03:29.393437 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.416526 kubelet[3198]: E1105 16:03:29.393448 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.444423 kubelet[3198]: E1105 16:03:29.444388 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.444423 kubelet[3198]: W1105 16:03:29.444413 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.444630 kubelet[3198]: E1105 16:03:29.444434 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.444687 kubelet[3198]: E1105 16:03:29.444672 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.444687 kubelet[3198]: W1105 16:03:29.444682 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.444885 kubelet[3198]: E1105 16:03:29.444692 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.445019 kubelet[3198]: E1105 16:03:29.444973 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.445019 kubelet[3198]: W1105 16:03:29.445014 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.445019 kubelet[3198]: E1105 16:03:29.445027 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.445236 kubelet[3198]: E1105 16:03:29.445221 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.445236 kubelet[3198]: W1105 16:03:29.445232 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.445308 kubelet[3198]: E1105 16:03:29.445241 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.445474 kubelet[3198]: E1105 16:03:29.445450 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.445474 kubelet[3198]: W1105 16:03:29.445467 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.445587 kubelet[3198]: E1105 16:03:29.445480 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.445743 kubelet[3198]: E1105 16:03:29.445707 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.445743 kubelet[3198]: W1105 16:03:29.445740 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.445743 kubelet[3198]: E1105 16:03:29.445749 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.446124 kubelet[3198]: E1105 16:03:29.445889 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.446124 kubelet[3198]: W1105 16:03:29.445898 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.446124 kubelet[3198]: E1105 16:03:29.445905 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.446366 kubelet[3198]: E1105 16:03:29.446349 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.446366 kubelet[3198]: W1105 16:03:29.446362 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.446433 kubelet[3198]: E1105 16:03:29.446375 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.446627 kubelet[3198]: E1105 16:03:29.446607 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.446627 kubelet[3198]: W1105 16:03:29.446622 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.446725 kubelet[3198]: E1105 16:03:29.446634 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.447015 kubelet[3198]: E1105 16:03:29.446895 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.447015 kubelet[3198]: W1105 16:03:29.446910 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.447015 kubelet[3198]: E1105 16:03:29.446922 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.447505 kubelet[3198]: E1105 16:03:29.447491 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.447614 kubelet[3198]: W1105 16:03:29.447577 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.447614 kubelet[3198]: E1105 16:03:29.447595 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.447914 kubelet[3198]: E1105 16:03:29.447897 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.447914 kubelet[3198]: W1105 16:03:29.447910 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.448024 kubelet[3198]: E1105 16:03:29.447921 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.448137 kubelet[3198]: E1105 16:03:29.448124 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.448137 kubelet[3198]: W1105 16:03:29.448134 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.448195 kubelet[3198]: E1105 16:03:29.448142 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.448689 kubelet[3198]: E1105 16:03:29.448399 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.448689 kubelet[3198]: W1105 16:03:29.448406 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.448689 kubelet[3198]: E1105 16:03:29.448425 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.448689 kubelet[3198]: E1105 16:03:29.448606 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.448689 kubelet[3198]: W1105 16:03:29.448612 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.448689 kubelet[3198]: E1105 16:03:29.448620 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.449013 kubelet[3198]: E1105 16:03:29.448791 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.449013 kubelet[3198]: W1105 16:03:29.448797 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.449013 kubelet[3198]: E1105 16:03:29.448804 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 16:03:29.449255 kubelet[3198]: E1105 16:03:29.449235 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.449255 kubelet[3198]: W1105 16:03:29.449248 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.449352 kubelet[3198]: E1105 16:03:29.449260 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.449635 kubelet[3198]: E1105 16:03:29.449461 3198 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 16:03:29.449635 kubelet[3198]: W1105 16:03:29.449469 3198 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 16:03:29.449635 kubelet[3198]: E1105 16:03:29.449480 3198 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 16:03:29.930442 containerd[1899]: time="2025-11-05T16:03:29.930388213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:29.931415 containerd[1899]: time="2025-11-05T16:03:29.931158490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 16:03:29.933030 containerd[1899]: time="2025-11-05T16:03:29.932239837Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:29.934616 containerd[1899]: time="2025-11-05T16:03:29.934569689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:29.935129 containerd[1899]: time="2025-11-05T16:03:29.935101874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.382053036s" Nov 5 16:03:29.935497 containerd[1899]: time="2025-11-05T16:03:29.935228872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 16:03:29.939965 containerd[1899]: time="2025-11-05T16:03:29.939906035Z" level=info msg="CreateContainer within sandbox \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 16:03:29.957164 containerd[1899]: time="2025-11-05T16:03:29.957111729Z" level=info msg="Container e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068: CDI devices 
from CRI Config.CDIDevices: []" Nov 5 16:03:29.965883 containerd[1899]: time="2025-11-05T16:03:29.965837853Z" level=info msg="CreateContainer within sandbox \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\"" Nov 5 16:03:29.967489 containerd[1899]: time="2025-11-05T16:03:29.966501698Z" level=info msg="StartContainer for \"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\"" Nov 5 16:03:29.970487 containerd[1899]: time="2025-11-05T16:03:29.970449023Z" level=info msg="connecting to shim e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068" address="unix:///run/containerd/s/ed4a6ce81a868c833e7536a5c2abb9dc50dbe758e6c11fd0fce108264870aaba" protocol=ttrpc version=3 Nov 5 16:03:30.002447 kubelet[3198]: E1105 16:03:30.002378 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:30.003856 systemd[1]: Started cri-containerd-e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068.scope - libcontainer container e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068. Nov 5 16:03:30.058513 containerd[1899]: time="2025-11-05T16:03:30.058468726Z" level=info msg="StartContainer for \"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\" returns successfully" Nov 5 16:03:30.072716 systemd[1]: cri-containerd-e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068.scope: Deactivated successfully. Nov 5 16:03:30.073132 systemd[1]: cri-containerd-e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068.scope: Consumed 34ms CPU time, 6.2M memory peak, 3.7M written to disk. Nov 5 16:03:30.108845 containerd[1899]: time="2025-11-05T16:03:30.108786638Z" level=info msg="received exit event container_id:\"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\" id:\"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\" pid:4070 exited_at:{seconds:1762358610 nanos:75610793}" Nov 5 16:03:30.144661 containerd[1899]: time="2025-11-05T16:03:30.144604627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\" id:\"e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068\" pid:4070 exited_at:{seconds:1762358610 nanos:75610793}" Nov 5 16:03:30.165804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e64135cc4f571f9f7436d168d7284885c0c9130b5dd56c087a67f9a5a3e58068-rootfs.mount: Deactivated successfully. 
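Two adjacent containerd records above describe the same event, the flexvol-driver container exiting: "received exit event" from the task service and "TaskExit" relayed to the pod sandbox handler, both carrying the end time as a protobuf-style exited_at:{seconds nanos} pair. A throwaway sketch (only a hand cross-check of the logged pair, not containerd code) converts those values back to wall-clock time and lands at 2025-11-05T16:03:30.075610793Z, consistent with the surrounding journal timestamps.

```go
// Convert the TaskExit exited_at {seconds, nanos} pair logged above into an
// RFC 3339 timestamp. Values copied from the log; purely a cross-check helper.
package main

import (
	"fmt"
	"time"
)

func main() {
	exitedAt := time.Unix(1762358610, 75610793).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-11-05T16:03:30.075610793Z
}
```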
Nov 5 16:03:31.323267 containerd[1899]: time="2025-11-05T16:03:31.323087817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 16:03:32.004833 kubelet[3198]: E1105 16:03:32.003740 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:34.004496 kubelet[3198]: E1105 16:03:34.004370 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:35.964279 containerd[1899]: time="2025-11-05T16:03:35.963207403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:35.965173 containerd[1899]: time="2025-11-05T16:03:35.964832953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 16:03:35.991631 containerd[1899]: time="2025-11-05T16:03:35.965888064Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:35.991631 containerd[1899]: time="2025-11-05T16:03:35.969722442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:35.991631 containerd[1899]: time="2025-11-05T16:03:35.970404269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.647187592s" Nov 5 16:03:35.991631 containerd[1899]: time="2025-11-05T16:03:35.970440074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 16:03:35.993651 containerd[1899]: time="2025-11-05T16:03:35.993599239Z" level=info msg="CreateContainer within sandbox \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 16:03:36.004562 kubelet[3198]: E1105 16:03:36.004507 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:36.029175 containerd[1899]: time="2025-11-05T16:03:36.029117752Z" level=info msg="Container 562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:36.036616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784569364.mount: Deactivated successfully. 
Nov 5 16:03:36.078086 containerd[1899]: time="2025-11-05T16:03:36.078027584Z" level=info msg="CreateContainer within sandbox \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\"" Nov 5 16:03:36.079076 containerd[1899]: time="2025-11-05T16:03:36.078813436Z" level=info msg="StartContainer for \"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\"" Nov 5 16:03:36.081002 containerd[1899]: time="2025-11-05T16:03:36.080887607Z" level=info msg="connecting to shim 562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323" address="unix:///run/containerd/s/ed4a6ce81a868c833e7536a5c2abb9dc50dbe758e6c11fd0fce108264870aaba" protocol=ttrpc version=3 Nov 5 16:03:36.149253 systemd[1]: Started cri-containerd-562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323.scope - libcontainer container 562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323. Nov 5 16:03:36.244945 containerd[1899]: time="2025-11-05T16:03:36.244783838Z" level=info msg="StartContainer for \"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\" returns successfully" Nov 5 16:03:37.546541 systemd[1]: cri-containerd-562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323.scope: Deactivated successfully. Nov 5 16:03:37.547250 systemd[1]: cri-containerd-562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323.scope: Consumed 625ms CPU time, 166.9M memory peak, 7.2M read from disk, 171.3M written to disk. Nov 5 16:03:37.662327 kubelet[3198]: I1105 16:03:37.661902 3198 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 5 16:03:37.676854 containerd[1899]: time="2025-11-05T16:03:37.676185455Z" level=info msg="received exit event container_id:\"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\" id:\"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\" pid:4130 exited_at:{seconds:1762358617 nanos:675762513}" Nov 5 16:03:37.677341 containerd[1899]: time="2025-11-05T16:03:37.676944043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\" id:\"562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323\" pid:4130 exited_at:{seconds:1762358617 nanos:675762513}" Nov 5 16:03:37.735656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-562c5e880a08ec922535591c3f695bee007d8df34f770af04ee3cc02c5b59323-rootfs.mount: Deactivated successfully. Nov 5 16:03:37.786358 systemd[1]: Created slice kubepods-burstable-pod094f6189_9d15_415e_a528_9777b0761bec.slice - libcontainer container kubepods-burstable-pod094f6189_9d15_415e_a528_9777b0761bec.slice. Nov 5 16:03:37.802028 systemd[1]: Created slice kubepods-burstable-poda38961a6_3ae9_4766_af33_07fe9a74faa6.slice - libcontainer container kubepods-burstable-poda38961a6_3ae9_4766_af33_07fe9a74faa6.slice. Nov 5 16:03:37.816602 systemd[1]: Created slice kubepods-besteffort-pod5b76ecda_67c8_4ccb_b2a9_6e4178612c50.slice - libcontainer container kubepods-besteffort-pod5b76ecda_67c8_4ccb_b2a9_6e4178612c50.slice. Nov 5 16:03:37.842741 systemd[1]: Created slice kubepods-besteffort-pod96835183_cb2e_4158_994a_2b18537288b4.slice - libcontainer container kubepods-besteffort-pod96835183_cb2e_4158_994a_2b18537288b4.slice. 
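The kubelet errors repeating above ("container runtime network not ready ... cni plugin not initialized") persist while the install-cni container is still copying the Calico CNI binaries and configuration into place. A rough sketch for checking whether a CNI network configuration has been written yet; /etc/cni/net.d is the conventional default config directory and is an assumption here, since the log does not name it:

    import os

    # Conventional CNI config directory (assumption; not named in this log).
    CNI_DIR = "/etc/cni/net.d"

    try:
        confs = sorted(os.listdir(CNI_DIR))
    except FileNotFoundError:
        confs = []

    if confs:
        print("CNI config present:", ", ".join(confs))
    else:
        print("no CNI config in", CNI_DIR, "- plugin not initialized yet")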
Nov 5 16:03:37.866038 systemd[1]: Created slice kubepods-besteffort-pod436b2852_bb09_4690_8210_c17e2fe57e96.slice - libcontainer container kubepods-besteffort-pod436b2852_bb09_4690_8210_c17e2fe57e96.slice. Nov 5 16:03:37.874205 systemd[1]: Created slice kubepods-besteffort-podc259a7b3_0c1e_4695_b558_e42d28fb4911.slice - libcontainer container kubepods-besteffort-podc259a7b3_0c1e_4695_b558_e42d28fb4911.slice. Nov 5 16:03:37.883150 systemd[1]: Created slice kubepods-besteffort-pod7aee59c4_6ad2_4a22_8442_b9f44431ab0e.slice - libcontainer container kubepods-besteffort-pod7aee59c4_6ad2_4a22_8442_b9f44431ab0e.slice. Nov 5 16:03:37.924049 kubelet[3198]: I1105 16:03:37.923557 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzs5\" (UniqueName: \"kubernetes.io/projected/436b2852-bb09-4690-8210-c17e2fe57e96-kube-api-access-nbzs5\") pod \"calico-apiserver-8fffdb464-q5zql\" (UID: \"436b2852-bb09-4690-8210-c17e2fe57e96\") " pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" Nov 5 16:03:37.924049 kubelet[3198]: I1105 16:03:37.923636 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzc5f\" (UniqueName: \"kubernetes.io/projected/a38961a6-3ae9-4766-af33-07fe9a74faa6-kube-api-access-lzc5f\") pod \"coredns-66bc5c9577-vw92k\" (UID: \"a38961a6-3ae9-4766-af33-07fe9a74faa6\") " pod="kube-system/coredns-66bc5c9577-vw92k" Nov 5 16:03:37.924049 kubelet[3198]: I1105 16:03:37.923662 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c259a7b3-0c1e-4695-b558-e42d28fb4911-config\") pod \"goldmane-7c778bb748-qgjqk\" (UID: \"c259a7b3-0c1e-4695-b558-e42d28fb4911\") " pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:37.924049 kubelet[3198]: I1105 16:03:37.923685 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-backend-key-pair\") pod \"whisker-6bb97b9bc-lxdn8\" (UID: \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\") " pod="calico-system/whisker-6bb97b9bc-lxdn8" Nov 5 16:03:37.924049 kubelet[3198]: I1105 16:03:37.923732 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96835183-cb2e-4158-994a-2b18537288b4-calico-apiserver-certs\") pod \"calico-apiserver-8fffdb464-mjcqs\" (UID: \"96835183-cb2e-4158-994a-2b18537288b4\") " pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" Nov 5 16:03:37.924466 kubelet[3198]: I1105 16:03:37.923755 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcqz\" (UniqueName: \"kubernetes.io/projected/c259a7b3-0c1e-4695-b558-e42d28fb4911-kube-api-access-dkcqz\") pod \"goldmane-7c778bb748-qgjqk\" (UID: \"c259a7b3-0c1e-4695-b558-e42d28fb4911\") " pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:37.924466 kubelet[3198]: I1105 16:03:37.923778 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/094f6189-9d15-415e-a528-9777b0761bec-config-volume\") pod \"coredns-66bc5c9577-glcrv\" (UID: \"094f6189-9d15-415e-a528-9777b0761bec\") " pod="kube-system/coredns-66bc5c9577-glcrv" Nov 5 16:03:37.924466 kubelet[3198]: I1105 
16:03:37.923807 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c259a7b3-0c1e-4695-b558-e42d28fb4911-goldmane-key-pair\") pod \"goldmane-7c778bb748-qgjqk\" (UID: \"c259a7b3-0c1e-4695-b558-e42d28fb4911\") " pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:37.924466 kubelet[3198]: I1105 16:03:37.923829 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlc72\" (UniqueName: \"kubernetes.io/projected/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-kube-api-access-dlc72\") pod \"whisker-6bb97b9bc-lxdn8\" (UID: \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\") " pod="calico-system/whisker-6bb97b9bc-lxdn8" Nov 5 16:03:37.924466 kubelet[3198]: I1105 16:03:37.923875 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b76ecda-67c8-4ccb-b2a9-6e4178612c50-tigera-ca-bundle\") pod \"calico-kube-controllers-5f9c9c664f-fhtxd\" (UID: \"5b76ecda-67c8-4ccb-b2a9-6e4178612c50\") " pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" Nov 5 16:03:37.924686 kubelet[3198]: I1105 16:03:37.923898 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-ca-bundle\") pod \"whisker-6bb97b9bc-lxdn8\" (UID: \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\") " pod="calico-system/whisker-6bb97b9bc-lxdn8" Nov 5 16:03:37.924686 kubelet[3198]: I1105 16:03:37.923947 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a38961a6-3ae9-4766-af33-07fe9a74faa6-config-volume\") pod \"coredns-66bc5c9577-vw92k\" (UID: \"a38961a6-3ae9-4766-af33-07fe9a74faa6\") " pod="kube-system/coredns-66bc5c9577-vw92k" Nov 5 16:03:37.924686 kubelet[3198]: I1105 16:03:37.924025 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmbcg\" (UniqueName: \"kubernetes.io/projected/96835183-cb2e-4158-994a-2b18537288b4-kube-api-access-rmbcg\") pod \"calico-apiserver-8fffdb464-mjcqs\" (UID: \"96835183-cb2e-4158-994a-2b18537288b4\") " pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" Nov 5 16:03:37.924686 kubelet[3198]: I1105 16:03:37.924061 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c259a7b3-0c1e-4695-b558-e42d28fb4911-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-qgjqk\" (UID: \"c259a7b3-0c1e-4695-b558-e42d28fb4911\") " pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:37.924686 kubelet[3198]: I1105 16:03:37.924084 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn725\" (UniqueName: \"kubernetes.io/projected/094f6189-9d15-415e-a528-9777b0761bec-kube-api-access-dn725\") pod \"coredns-66bc5c9577-glcrv\" (UID: \"094f6189-9d15-415e-a528-9777b0761bec\") " pod="kube-system/coredns-66bc5c9577-glcrv" Nov 5 16:03:37.925584 kubelet[3198]: I1105 16:03:37.924105 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks48d\" (UniqueName: \"kubernetes.io/projected/5b76ecda-67c8-4ccb-b2a9-6e4178612c50-kube-api-access-ks48d\") pod 
\"calico-kube-controllers-5f9c9c664f-fhtxd\" (UID: \"5b76ecda-67c8-4ccb-b2a9-6e4178612c50\") " pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" Nov 5 16:03:37.925584 kubelet[3198]: I1105 16:03:37.924127 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/436b2852-bb09-4690-8210-c17e2fe57e96-calico-apiserver-certs\") pod \"calico-apiserver-8fffdb464-q5zql\" (UID: \"436b2852-bb09-4690-8210-c17e2fe57e96\") " pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" Nov 5 16:03:38.017427 systemd[1]: Created slice kubepods-besteffort-podd0a5c89c_b602_442e_811b_c3720b9add41.slice - libcontainer container kubepods-besteffort-podd0a5c89c_b602_442e_811b_c3720b9add41.slice. Nov 5 16:03:38.023807 containerd[1899]: time="2025-11-05T16:03:38.023764420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k4x5,Uid:d0a5c89c-b602-442e-811b-c3720b9add41,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:38.169602 containerd[1899]: time="2025-11-05T16:03:38.168889697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-mjcqs,Uid:96835183-cb2e-4158-994a-2b18537288b4,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:38.186065 containerd[1899]: time="2025-11-05T16:03:38.186024514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qgjqk,Uid:c259a7b3-0c1e-4695-b558-e42d28fb4911,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:38.187550 containerd[1899]: time="2025-11-05T16:03:38.187251901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-q5zql,Uid:436b2852-bb09-4690-8210-c17e2fe57e96,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:38.192321 containerd[1899]: time="2025-11-05T16:03:38.192287956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb97b9bc-lxdn8,Uid:7aee59c4-6ad2-4a22-8442-b9f44431ab0e,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:38.368021 containerd[1899]: time="2025-11-05T16:03:38.367952595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 16:03:38.402664 containerd[1899]: time="2025-11-05T16:03:38.402569992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glcrv,Uid:094f6189-9d15-415e-a528-9777b0761bec,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:38.408676 containerd[1899]: time="2025-11-05T16:03:38.408626172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw92k,Uid:a38961a6-3ae9-4766-af33-07fe9a74faa6,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:38.429289 containerd[1899]: time="2025-11-05T16:03:38.428923714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c9c664f-fhtxd,Uid:5b76ecda-67c8-4ccb-b2a9-6e4178612c50,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:40.226005 containerd[1899]: time="2025-11-05T16:03:40.225357540Z" level=error msg="Failed to destroy network for sandbox \"fe7e1396fea340764cb1a9f6452b0a05a20b9093a446be7587ebad75f1765a2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.233177 systemd[1]: run-netns-cni\x2dbea1b280\x2dcc4c\x2dce54\x2dc56f\x2daab20328b36c.mount: Deactivated successfully. 
Nov 5 16:03:40.240540 containerd[1899]: time="2025-11-05T16:03:40.240461210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c9c664f-fhtxd,Uid:5b76ecda-67c8-4ccb-b2a9-6e4178612c50,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e1396fea340764cb1a9f6452b0a05a20b9093a446be7587ebad75f1765a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.286755 kubelet[3198]: E1105 16:03:40.286592 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e1396fea340764cb1a9f6452b0a05a20b9093a446be7587ebad75f1765a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.289027 kubelet[3198]: E1105 16:03:40.287581 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e1396fea340764cb1a9f6452b0a05a20b9093a446be7587ebad75f1765a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" Nov 5 16:03:40.289027 kubelet[3198]: E1105 16:03:40.287637 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7e1396fea340764cb1a9f6452b0a05a20b9093a446be7587ebad75f1765a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" Nov 5 16:03:40.289027 kubelet[3198]: E1105 16:03:40.287729 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f9c9c664f-fhtxd_calico-system(5b76ecda-67c8-4ccb-b2a9-6e4178612c50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f9c9c664f-fhtxd_calico-system(5b76ecda-67c8-4ccb-b2a9-6e4178612c50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe7e1396fea340764cb1a9f6452b0a05a20b9093a446be7587ebad75f1765a2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:03:40.316283 kubelet[3198]: E1105 16:03:40.303796 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d114653c13594faeea9c785a85020375389c5645ebcbf226c3f5d1bcf03e102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.316283 kubelet[3198]: E1105 16:03:40.303857 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3d114653c13594faeea9c785a85020375389c5645ebcbf226c3f5d1bcf03e102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" Nov 5 16:03:40.316283 kubelet[3198]: E1105 16:03:40.303964 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02867310a8723e553520526a398f010778e19fb8517c4f74deff091a022c2578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.316283 kubelet[3198]: E1105 16:03:40.304445 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02867310a8723e553520526a398f010778e19fb8517c4f74deff091a022c2578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glcrv" Nov 5 16:03:40.302395 systemd[1]: run-netns-cni\x2d81fc5697\x2d3ce6\x2d6d88\x2db2a3\x2dd8893301a2ff.mount: Deactivated successfully. Nov 5 16:03:40.316591 containerd[1899]: time="2025-11-05T16:03:40.295155899Z" level=error msg="Failed to destroy network for sandbox \"02867310a8723e553520526a398f010778e19fb8517c4f74deff091a022c2578\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.316591 containerd[1899]: time="2025-11-05T16:03:40.298387144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glcrv,Uid:094f6189-9d15-415e-a528-9777b0761bec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02867310a8723e553520526a398f010778e19fb8517c4f74deff091a022c2578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.316591 containerd[1899]: time="2025-11-05T16:03:40.298594150Z" level=error msg="Failed to destroy network for sandbox \"3d114653c13594faeea9c785a85020375389c5645ebcbf226c3f5d1bcf03e102\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.316591 containerd[1899]: time="2025-11-05T16:03:40.303193518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-q5zql,Uid:436b2852-bb09-4690-8210-c17e2fe57e96,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d114653c13594faeea9c785a85020375389c5645ebcbf226c3f5d1bcf03e102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.319712 kubelet[3198]: E1105 16:03:40.304480 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"02867310a8723e553520526a398f010778e19fb8517c4f74deff091a022c2578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glcrv" Nov 5 16:03:40.319712 kubelet[3198]: E1105 16:03:40.304553 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-glcrv_kube-system(094f6189-9d15-415e-a528-9777b0761bec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-glcrv_kube-system(094f6189-9d15-415e-a528-9777b0761bec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02867310a8723e553520526a398f010778e19fb8517c4f74deff091a022c2578\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-glcrv" podUID="094f6189-9d15-415e-a528-9777b0761bec" Nov 5 16:03:40.319712 kubelet[3198]: E1105 16:03:40.303883 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d114653c13594faeea9c785a85020375389c5645ebcbf226c3f5d1bcf03e102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" Nov 5 16:03:40.313651 systemd[1]: run-netns-cni\x2d62761f7b\x2d3670\x2d81b1\x2d707a\x2dc56387d1f8ab.mount: Deactivated successfully. Nov 5 16:03:40.323498 containerd[1899]: time="2025-11-05T16:03:40.303577109Z" level=error msg="Failed to destroy network for sandbox \"9765830362a9c7d9979d82d213159dfcbee463134dcad9cd85fcc8b93eaf6650\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.323498 containerd[1899]: time="2025-11-05T16:03:40.316045324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb97b9bc-lxdn8,Uid:7aee59c4-6ad2-4a22-8442-b9f44431ab0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9765830362a9c7d9979d82d213159dfcbee463134dcad9cd85fcc8b93eaf6650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.323498 containerd[1899]: time="2025-11-05T16:03:40.317804858Z" level=error msg="Failed to destroy network for sandbox \"e5123eeac8ebf50fe99dee97ff7006bc448e26ea6633bbbb198744c77a76fe92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.323802 kubelet[3198]: E1105 16:03:40.309136 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8fffdb464-q5zql_calico-apiserver(436b2852-bb09-4690-8210-c17e2fe57e96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8fffdb464-q5zql_calico-apiserver(436b2852-bb09-4690-8210-c17e2fe57e96)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"3d114653c13594faeea9c785a85020375389c5645ebcbf226c3f5d1bcf03e102\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:03:40.314102 systemd[1]: run-netns-cni\x2da6236cb4\x2d7d4b\x2db5fa\x2d8488\x2dfafe7c9a6fb5.mount: Deactivated successfully. Nov 5 16:03:40.321557 systemd[1]: run-netns-cni\x2d6a7953e3\x2d5f60\x2dc1b5\x2d0542\x2d9c2b1852c38d.mount: Deactivated successfully. Nov 5 16:03:40.328850 containerd[1899]: time="2025-11-05T16:03:40.326952678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qgjqk,Uid:c259a7b3-0c1e-4695-b558-e42d28fb4911,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5123eeac8ebf50fe99dee97ff7006bc448e26ea6633bbbb198744c77a76fe92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.328850 containerd[1899]: time="2025-11-05T16:03:40.328470390Z" level=error msg="Failed to destroy network for sandbox \"e4bc0e21bd523b1b710b80f8893a4fc6d863511a296aaf5612e7605d50d8e906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.329142 kubelet[3198]: E1105 16:03:40.327225 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5123eeac8ebf50fe99dee97ff7006bc448e26ea6633bbbb198744c77a76fe92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.329142 kubelet[3198]: E1105 16:03:40.327292 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5123eeac8ebf50fe99dee97ff7006bc448e26ea6633bbbb198744c77a76fe92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:40.329142 kubelet[3198]: E1105 16:03:40.327325 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5123eeac8ebf50fe99dee97ff7006bc448e26ea6633bbbb198744c77a76fe92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:40.329737 kubelet[3198]: E1105 16:03:40.327390 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e5123eeac8ebf50fe99dee97ff7006bc448e26ea6633bbbb198744c77a76fe92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:03:40.329737 kubelet[3198]: E1105 16:03:40.329378 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9765830362a9c7d9979d82d213159dfcbee463134dcad9cd85fcc8b93eaf6650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.329737 kubelet[3198]: E1105 16:03:40.329463 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9765830362a9c7d9979d82d213159dfcbee463134dcad9cd85fcc8b93eaf6650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb97b9bc-lxdn8" Nov 5 16:03:40.329924 kubelet[3198]: E1105 16:03:40.329494 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9765830362a9c7d9979d82d213159dfcbee463134dcad9cd85fcc8b93eaf6650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bb97b9bc-lxdn8" Nov 5 16:03:40.330532 kubelet[3198]: E1105 16:03:40.330238 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bb97b9bc-lxdn8_calico-system(7aee59c4-6ad2-4a22-8442-b9f44431ab0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bb97b9bc-lxdn8_calico-system(7aee59c4-6ad2-4a22-8442-b9f44431ab0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9765830362a9c7d9979d82d213159dfcbee463134dcad9cd85fcc8b93eaf6650\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bb97b9bc-lxdn8" podUID="7aee59c4-6ad2-4a22-8442-b9f44431ab0e" Nov 5 16:03:40.332433 systemd[1]: run-netns-cni\x2d7853bee1\x2df620\x2dc196\x2dbbc4\x2dd7ee86f0d2bc.mount: Deactivated successfully. 
Nov 5 16:03:40.333582 containerd[1899]: time="2025-11-05T16:03:40.333509826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw92k,Uid:a38961a6-3ae9-4766-af33-07fe9a74faa6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bc0e21bd523b1b710b80f8893a4fc6d863511a296aaf5612e7605d50d8e906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.334316 kubelet[3198]: E1105 16:03:40.334252 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bc0e21bd523b1b710b80f8893a4fc6d863511a296aaf5612e7605d50d8e906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.334660 kubelet[3198]: E1105 16:03:40.334418 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bc0e21bd523b1b710b80f8893a4fc6d863511a296aaf5612e7605d50d8e906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vw92k" Nov 5 16:03:40.334660 kubelet[3198]: E1105 16:03:40.334448 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bc0e21bd523b1b710b80f8893a4fc6d863511a296aaf5612e7605d50d8e906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vw92k" Nov 5 16:03:40.335063 kubelet[3198]: E1105 16:03:40.334764 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vw92k_kube-system(a38961a6-3ae9-4766-af33-07fe9a74faa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vw92k_kube-system(a38961a6-3ae9-4766-af33-07fe9a74faa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4bc0e21bd523b1b710b80f8893a4fc6d863511a296aaf5612e7605d50d8e906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vw92k" podUID="a38961a6-3ae9-4766-af33-07fe9a74faa6" Nov 5 16:03:40.338995 containerd[1899]: time="2025-11-05T16:03:40.338882774Z" level=error msg="Failed to destroy network for sandbox \"d2511f1d4da05b29481d752c388c862250f566e37df8f7d18f9f3b42d764c3b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.341316 containerd[1899]: time="2025-11-05T16:03:40.340651552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-mjcqs,Uid:96835183-cb2e-4158-994a-2b18537288b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d2511f1d4da05b29481d752c388c862250f566e37df8f7d18f9f3b42d764c3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.341513 kubelet[3198]: E1105 16:03:40.340969 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2511f1d4da05b29481d752c388c862250f566e37df8f7d18f9f3b42d764c3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.341513 kubelet[3198]: E1105 16:03:40.341054 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2511f1d4da05b29481d752c388c862250f566e37df8f7d18f9f3b42d764c3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" Nov 5 16:03:40.341513 kubelet[3198]: E1105 16:03:40.341129 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2511f1d4da05b29481d752c388c862250f566e37df8f7d18f9f3b42d764c3b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" Nov 5 16:03:40.341680 kubelet[3198]: E1105 16:03:40.341256 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8fffdb464-mjcqs_calico-apiserver(96835183-cb2e-4158-994a-2b18537288b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8fffdb464-mjcqs_calico-apiserver(96835183-cb2e-4158-994a-2b18537288b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2511f1d4da05b29481d752c388c862250f566e37df8f7d18f9f3b42d764c3b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:03:40.347072 containerd[1899]: time="2025-11-05T16:03:40.346747379Z" level=error msg="Failed to destroy network for sandbox \"2c249d9a00d01fb4adf3fdbda6d15abd0c12934017319781672fa6c7961ece68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.348285 containerd[1899]: time="2025-11-05T16:03:40.348238342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k4x5,Uid:d0a5c89c-b602-442e-811b-c3720b9add41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c249d9a00d01fb4adf3fdbda6d15abd0c12934017319781672fa6c7961ece68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.348992 kubelet[3198]: E1105 16:03:40.348654 
3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c249d9a00d01fb4adf3fdbda6d15abd0c12934017319781672fa6c7961ece68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:40.348992 kubelet[3198]: E1105 16:03:40.348711 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c249d9a00d01fb4adf3fdbda6d15abd0c12934017319781672fa6c7961ece68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:40.348992 kubelet[3198]: E1105 16:03:40.348737 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c249d9a00d01fb4adf3fdbda6d15abd0c12934017319781672fa6c7961ece68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7k4x5" Nov 5 16:03:40.349392 kubelet[3198]: E1105 16:03:40.348799 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c249d9a00d01fb4adf3fdbda6d15abd0c12934017319781672fa6c7961ece68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:41.229965 systemd[1]: run-netns-cni\x2d8661ae48\x2d0acb\x2d1fa8\x2d3eb4\x2db26a364f452d.mount: Deactivated successfully. Nov 5 16:03:41.230535 systemd[1]: run-netns-cni\x2dce95040c\x2d14a3\x2d6dc8\x2d336f\x2d780c1cbb5263.mount: Deactivated successfully. Nov 5 16:03:46.684363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203143866.mount: Deactivated successfully. 
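The run-netns mount units being cleaned up above use systemd's unit-name escaping, in which characters such as '-' in the original path appear as '\x2d' (systemd-escape can reverse this on a live host). For reading a saved capture offline, a small Python sketch that undoes the \xNN escapes:

    import re

    def unescape_unit(name: str) -> str:
        """Reverse systemd's \\xNN unit-name escaping (e.g. \\x2d -> '-')."""
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)),
                      name)

    print(unescape_unit(r"run-netns-cni\x2d8661ae48\x2d0acb\x2d1fa8\x2d3eb4\x2db26a364f452d.mount"))
    # -> run-netns-cni-8661ae48-0acb-1fa8-3eb4-b26a364f452d.mount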
Nov 5 16:03:46.771506 containerd[1899]: time="2025-11-05T16:03:46.770549310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:46.800859 containerd[1899]: time="2025-11-05T16:03:46.799800766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 16:03:46.836300 containerd[1899]: time="2025-11-05T16:03:46.835603704Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:46.839690 containerd[1899]: time="2025-11-05T16:03:46.839644330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:03:46.843832 containerd[1899]: time="2025-11-05T16:03:46.843758762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.471947408s" Nov 5 16:03:46.843832 containerd[1899]: time="2025-11-05T16:03:46.843813085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 16:03:46.897431 containerd[1899]: time="2025-11-05T16:03:46.897307788Z" level=info msg="CreateContainer within sandbox \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 16:03:46.976511 containerd[1899]: time="2025-11-05T16:03:46.976179451Z" level=info msg="Container bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:03:46.979050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294818454.mount: Deactivated successfully. Nov 5 16:03:47.038362 containerd[1899]: time="2025-11-05T16:03:47.038219733Z" level=info msg="CreateContainer within sandbox \"5b5cee7c3d5369d1a42eada0d5095779fd3ec16d5d539b05d24189c97c6d2bcb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\"" Nov 5 16:03:47.050940 containerd[1899]: time="2025-11-05T16:03:47.050887241Z" level=info msg="StartContainer for \"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\"" Nov 5 16:03:47.057186 containerd[1899]: time="2025-11-05T16:03:47.057136294Z" level=info msg="connecting to shim bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469" address="unix:///run/containerd/s/ed4a6ce81a868c833e7536a5c2abb9dc50dbe758e6c11fd0fce108264870aaba" protocol=ttrpc version=3 Nov 5 16:03:47.170466 systemd[1]: Started cri-containerd-bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469.scope - libcontainer container bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469. 
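The three containerd "Pulled image ... in <duration>" entries in this capture put numbers on where startup time went: pod2daemon-flexvol in about 1.38 s, cni in about 4.65 s, and node in about 8.47 s. A sketch that extracts those durations from a saved journal, under the same one-entry-per-line, journal.log assumptions as above:

    import re

    # Matches containerd's: Pulled image \"<ref>\" ... in <seconds>s
    PAT = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+)s')

    with open("journal.log") as f:                 # hypothetical capture of this log
        for line in f:
            for ref, secs in PAT.findall(line):
                print(f"{float(secs):8.3f}s  {ref}")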
Nov 5 16:03:47.247038 containerd[1899]: time="2025-11-05T16:03:47.246500754Z" level=info msg="StartContainer for \"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\" returns successfully" Nov 5 16:03:47.508543 kubelet[3198]: I1105 16:03:47.505192 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kn7b4" podStartSLOduration=1.839021461 podStartE2EDuration="22.505165085s" podCreationTimestamp="2025-11-05 16:03:25 +0000 UTC" firstStartedPulling="2025-11-05 16:03:26.178506027 +0000 UTC m=+54.364561801" lastFinishedPulling="2025-11-05 16:03:46.844649653 +0000 UTC m=+75.030705425" observedRunningTime="2025-11-05 16:03:47.498041931 +0000 UTC m=+75.684097746" watchObservedRunningTime="2025-11-05 16:03:47.505165085 +0000 UTC m=+75.691220877" Nov 5 16:03:51.011048 containerd[1899]: time="2025-11-05T16:03:51.010740770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qgjqk,Uid:c259a7b3-0c1e-4695-b558-e42d28fb4911,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:51.026235 containerd[1899]: time="2025-11-05T16:03:51.025235603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glcrv,Uid:094f6189-9d15-415e-a528-9777b0761bec,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:51.265826 containerd[1899]: time="2025-11-05T16:03:51.265701423Z" level=error msg="Failed to destroy network for sandbox \"f48f0b8b5365151e153f0a93fe5ee6fd1533b21bb45ddb4fb4e6420382457f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:51.272019 containerd[1899]: time="2025-11-05T16:03:51.271637363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glcrv,Uid:094f6189-9d15-415e-a528-9777b0761bec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f0b8b5365151e153f0a93fe5ee6fd1533b21bb45ddb4fb4e6420382457f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:51.273191 kubelet[3198]: E1105 16:03:51.272939 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f0b8b5365151e153f0a93fe5ee6fd1533b21bb45ddb4fb4e6420382457f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:51.274441 systemd[1]: run-netns-cni\x2da2bf220e\x2d8037\x2dd32a\x2da509\x2dd57a452f6002.mount: Deactivated successfully. 
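The pod_startup_latency_tracker entry above reports podStartE2EDuration="22.505165085s" and podStartSLOduration=1.839021461 for calico-node-kn7b4. Those two numbers are consistent with the quoted timestamps: end-to-end is watchObservedRunningTime minus podCreationTimestamp, and the SLO figure is that same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A quick arithmetic check in Python, with the nanosecond timestamps truncated to microseconds:

    from datetime import datetime

    # Timestamps quoted in the pod_startup_latency_tracker entry above
    # (truncated to microsecond precision for strptime's %f).
    FMT = "%Y-%m-%d %H:%M:%S.%f %z"
    created  = datetime.strptime("2025-11-05 16:03:25.000000 +0000", FMT)
    pull_beg = datetime.strptime("2025-11-05 16:03:26.178506 +0000", FMT)
    pull_end = datetime.strptime("2025-11-05 16:03:46.844649 +0000", FMT)
    observed = datetime.strptime("2025-11-05 16:03:47.505165 +0000", FMT)

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_end - pull_beg).total_seconds()
    print(f"e2e={e2e:.6f}s slo={slo:.6f}s")
    # e2e=22.505165s slo=1.839022s, matching the quoted values to within truncation.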
Nov 5 16:03:51.275527 kubelet[3198]: E1105 16:03:51.274283 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f0b8b5365151e153f0a93fe5ee6fd1533b21bb45ddb4fb4e6420382457f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glcrv" Nov 5 16:03:51.275527 kubelet[3198]: E1105 16:03:51.275298 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f0b8b5365151e153f0a93fe5ee6fd1533b21bb45ddb4fb4e6420382457f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glcrv" Nov 5 16:03:51.275527 kubelet[3198]: E1105 16:03:51.275404 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-glcrv_kube-system(094f6189-9d15-415e-a528-9777b0761bec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-glcrv_kube-system(094f6189-9d15-415e-a528-9777b0761bec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f48f0b8b5365151e153f0a93fe5ee6fd1533b21bb45ddb4fb4e6420382457f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-glcrv" podUID="094f6189-9d15-415e-a528-9777b0761bec" Nov 5 16:03:51.285715 containerd[1899]: time="2025-11-05T16:03:51.285667931Z" level=error msg="Failed to destroy network for sandbox \"3469f095968853d81b108926649f5fbc62eb5ba71a858d05a8da09a2cc63a03e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:51.291012 systemd[1]: run-netns-cni\x2d6021fd43\x2d9160\x2ded02\x2d556b\x2dc78e6582abdb.mount: Deactivated successfully. 
Nov 5 16:03:51.292426 containerd[1899]: time="2025-11-05T16:03:51.292367909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qgjqk,Uid:c259a7b3-0c1e-4695-b558-e42d28fb4911,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3469f095968853d81b108926649f5fbc62eb5ba71a858d05a8da09a2cc63a03e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:51.296236 kubelet[3198]: E1105 16:03:51.296175 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3469f095968853d81b108926649f5fbc62eb5ba71a858d05a8da09a2cc63a03e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:51.296443 kubelet[3198]: E1105 16:03:51.296241 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3469f095968853d81b108926649f5fbc62eb5ba71a858d05a8da09a2cc63a03e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:51.296443 kubelet[3198]: E1105 16:03:51.296265 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3469f095968853d81b108926649f5fbc62eb5ba71a858d05a8da09a2cc63a03e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-qgjqk" Nov 5 16:03:51.296443 kubelet[3198]: E1105 16:03:51.296351 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3469f095968853d81b108926649f5fbc62eb5ba71a858d05a8da09a2cc63a03e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:03:51.829015 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 16:03:51.834434 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 16:03:51.936421 containerd[1899]: time="2025-11-05T16:03:51.936379189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\" id:\"8a9d594ed9ad82ee4df09f19e7ea3043a6bd5b0f993a55e0ee32cf3f30c8277c\" pid:4434 exit_status:1 exited_at:{seconds:1762358631 nanos:936029482}" Nov 5 16:03:52.012177 containerd[1899]: time="2025-11-05T16:03:52.012127738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw92k,Uid:a38961a6-3ae9-4766-af33-07fe9a74faa6,Namespace:kube-system,Attempt:0,}" Nov 5 16:03:52.158275 containerd[1899]: time="2025-11-05T16:03:52.158217236Z" level=error msg="Failed to destroy network for sandbox \"c93ef2546e87d57cc44e226717cc74d11e9664a12d11a97cb933a34e45359066\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:52.163413 systemd[1]: run-netns-cni\x2dfd9ccb7c\x2dfa2a\x2d898b\x2d4d18\x2d6e764a8892a2.mount: Deactivated successfully. Nov 5 16:03:52.164520 containerd[1899]: time="2025-11-05T16:03:52.164376749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw92k,Uid:a38961a6-3ae9-4766-af33-07fe9a74faa6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ef2546e87d57cc44e226717cc74d11e9664a12d11a97cb933a34e45359066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:52.165082 kubelet[3198]: E1105 16:03:52.164713 3198 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ef2546e87d57cc44e226717cc74d11e9664a12d11a97cb933a34e45359066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 16:03:52.165082 kubelet[3198]: E1105 16:03:52.164857 3198 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ef2546e87d57cc44e226717cc74d11e9664a12d11a97cb933a34e45359066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vw92k" Nov 5 16:03:52.165082 kubelet[3198]: E1105 16:03:52.164893 3198 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c93ef2546e87d57cc44e226717cc74d11e9664a12d11a97cb933a34e45359066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vw92k" Nov 5 16:03:52.165312 kubelet[3198]: E1105 16:03:52.165034 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vw92k_kube-system(a38961a6-3ae9-4766-af33-07fe9a74faa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vw92k_kube-system(a38961a6-3ae9-4766-af33-07fe9a74faa6)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"c93ef2546e87d57cc44e226717cc74d11e9664a12d11a97cb933a34e45359066\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vw92k" podUID="a38961a6-3ae9-4766-af33-07fe9a74faa6" Nov 5 16:03:52.401003 containerd[1899]: time="2025-11-05T16:03:52.400943517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\" id:\"94ea5a6a5e4c6151391ea421dce923b1a3f8e59f93f3722f29994b6e3114fbfc\" pid:4535 exit_status:1 exited_at:{seconds:1762358632 nanos:400236517}" Nov 5 16:03:52.955998 kubelet[3198]: I1105 16:03:52.955770 3198 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlc72\" (UniqueName: \"kubernetes.io/projected/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-kube-api-access-dlc72\") pod \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\" (UID: \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\") " Nov 5 16:03:52.955998 kubelet[3198]: I1105 16:03:52.955813 3198 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-backend-key-pair\") pod \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\" (UID: \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\") " Nov 5 16:03:52.955998 kubelet[3198]: I1105 16:03:52.955853 3198 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-ca-bundle\") pod \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\" (UID: \"7aee59c4-6ad2-4a22-8442-b9f44431ab0e\") " Nov 5 16:03:52.959029 kubelet[3198]: I1105 16:03:52.958821 3198 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7aee59c4-6ad2-4a22-8442-b9f44431ab0e" (UID: "7aee59c4-6ad2-4a22-8442-b9f44431ab0e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 16:03:52.987385 kubelet[3198]: I1105 16:03:52.987087 3198 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-kube-api-access-dlc72" (OuterVolumeSpecName: "kube-api-access-dlc72") pod "7aee59c4-6ad2-4a22-8442-b9f44431ab0e" (UID: "7aee59c4-6ad2-4a22-8442-b9f44431ab0e"). InnerVolumeSpecName "kube-api-access-dlc72". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:03:52.988169 kubelet[3198]: I1105 16:03:52.988136 3198 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7aee59c4-6ad2-4a22-8442-b9f44431ab0e" (UID: "7aee59c4-6ad2-4a22-8442-b9f44431ab0e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 16:03:52.988634 systemd[1]: var-lib-kubelet-pods-7aee59c4\x2d6ad2\x2d4a22\x2d8442\x2db9f44431ab0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlc72.mount: Deactivated successfully. 
Nov 5 16:03:52.993272 systemd[1]: var-lib-kubelet-pods-7aee59c4\x2d6ad2\x2d4a22\x2d8442\x2db9f44431ab0e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 16:03:53.008039 containerd[1899]: time="2025-11-05T16:03:53.007910515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c9c664f-fhtxd,Uid:5b76ecda-67c8-4ccb-b2a9-6e4178612c50,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:53.018004 containerd[1899]: time="2025-11-05T16:03:53.017865706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-q5zql,Uid:436b2852-bb09-4690-8210-c17e2fe57e96,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:53.057408 kubelet[3198]: I1105 16:03:53.057350 3198 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-ca-bundle\") on node \"ip-172-31-16-11\" DevicePath \"\"" Nov 5 16:03:53.057408 kubelet[3198]: I1105 16:03:53.057389 3198 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dlc72\" (UniqueName: \"kubernetes.io/projected/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-kube-api-access-dlc72\") on node \"ip-172-31-16-11\" DevicePath \"\"" Nov 5 16:03:53.057408 kubelet[3198]: I1105 16:03:53.057403 3198 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7aee59c4-6ad2-4a22-8442-b9f44431ab0e-whisker-backend-key-pair\") on node \"ip-172-31-16-11\" DevicePath \"\"" Nov 5 16:03:53.509524 systemd[1]: Removed slice kubepods-besteffort-pod7aee59c4_6ad2_4a22_8442_b9f44431ab0e.slice - libcontainer container kubepods-besteffort-pod7aee59c4_6ad2_4a22_8442_b9f44431ab0e.slice. Nov 5 16:03:53.815828 systemd[1]: Created slice kubepods-besteffort-pod34aa5cb5_d018_431d_960a_4659dc21c0b7.slice - libcontainer container kubepods-besteffort-pod34aa5cb5_d018_431d_960a_4659dc21c0b7.slice. 
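[editor's note] The systemd mount units deactivated above are kubelet volume paths with systemd's path escaping applied: "/" becomes "-", and other bytes are written as \xNN (so \x2d is a literal "-" and \x7e is "~"). A small, purely illustrative decoder that maps the unit name from the log back to the underlying path:

// Decodes a systemd mount-unit name like the ones deactivated above back
// into the kubelet volume path it refers to. Illustrative helper only.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			// \xNN escapes a single byte, e.g. \x2d -> '-', \x7e -> '~'.
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			// An unescaped '-' stands for a path separator.
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	unit := `var-lib-kubelet-pods-7aee59c4\x2d6ad2\x2d4a22\x2d8442\x2db9f44431ab0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddlc72.mount`
	fmt.Println(unescapeUnitPath(unit))
	// /var/lib/kubelet/pods/7aee59c4-6ad2-4a22-8442-b9f44431ab0e/volumes/kubernetes.io~projected/kube-api-access-dlc72
}

The decoded path sits under the same /var/lib/kubelet/pods/7aee59c4-.../volumes tree that kubelet reports cleaning up a little later in the log.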
Nov 5 16:03:53.862893 kubelet[3198]: I1105 16:03:53.862576 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34aa5cb5-d018-431d-960a-4659dc21c0b7-whisker-backend-key-pair\") pod \"whisker-c75ccf967-dqkw4\" (UID: \"34aa5cb5-d018-431d-960a-4659dc21c0b7\") " pod="calico-system/whisker-c75ccf967-dqkw4" Nov 5 16:03:53.862893 kubelet[3198]: I1105 16:03:53.862636 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34aa5cb5-d018-431d-960a-4659dc21c0b7-whisker-ca-bundle\") pod \"whisker-c75ccf967-dqkw4\" (UID: \"34aa5cb5-d018-431d-960a-4659dc21c0b7\") " pod="calico-system/whisker-c75ccf967-dqkw4" Nov 5 16:03:53.863456 kubelet[3198]: I1105 16:03:53.863393 3198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwks2\" (UniqueName: \"kubernetes.io/projected/34aa5cb5-d018-431d-960a-4659dc21c0b7-kube-api-access-hwks2\") pod \"whisker-c75ccf967-dqkw4\" (UID: \"34aa5cb5-d018-431d-960a-4659dc21c0b7\") " pod="calico-system/whisker-c75ccf967-dqkw4" Nov 5 16:03:54.006481 kubelet[3198]: I1105 16:03:54.006437 3198 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aee59c4-6ad2-4a22-8442-b9f44431ab0e" path="/var/lib/kubelet/pods/7aee59c4-6ad2-4a22-8442-b9f44431ab0e/volumes" Nov 5 16:03:54.007998 containerd[1899]: time="2025-11-05T16:03:54.007740086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k4x5,Uid:d0a5c89c-b602-442e-811b-c3720b9add41,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:54.127893 containerd[1899]: time="2025-11-05T16:03:54.127852464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c75ccf967-dqkw4,Uid:34aa5cb5-d018-431d-960a-4659dc21c0b7,Namespace:calico-system,Attempt:0,}" Nov 5 16:03:55.008695 containerd[1899]: time="2025-11-05T16:03:55.008648180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-mjcqs,Uid:96835183-cb2e-4158-994a-2b18537288b4,Namespace:calico-apiserver,Attempt:0,}" Nov 5 16:03:55.755455 systemd-networkd[1473]: vxlan.calico: Link UP Nov 5 16:03:55.755758 systemd-networkd[1473]: vxlan.calico: Gained carrier Nov 5 16:03:55.760441 (udev-worker)[4792]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:03:55.799796 (udev-worker)[4787]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:03:55.801793 (udev-worker)[4804]: Network interface NamePolicy= disabled on kernel command line. 
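[editor's note] The "vxlan.calico: Link UP / Gained carrier" records mark the creation of Calico's VXLAN overlay device, which systemd-networkd then observes; the udev-worker lines only restate that no interface naming policy applies. As a rough sketch (not Felix's actual code), a device like this can be created with github.com/vishvananda/netlink; the parent interface name, MTU, and the VNI/port values (Calico's documented defaults) are assumptions here:

// Illustrative creation of a VXLAN device named like the one in the log.
// Requires CAP_NET_ADMIN; not Calico's implementation.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	parent, err := netlink.LinkByName("eth0") // assumed underlay device
	if err != nil {
		log.Fatal(err)
	}
	vxlan := &netlink.Vxlan{
		LinkAttrs:    netlink.LinkAttrs{Name: "vxlan.calico", MTU: 1450}, // MTU assumed
		VxlanId:      4096, // Calico's documented default VNI (assumption)
		Port:         4789, // standard VXLAN UDP port
		VtepDevIndex: parent.Attrs().Index,
	}
	if err := netlink.LinkAdd(vxlan); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(vxlan); err != nil {
		log.Fatal(err)
	}
	// systemd-networkd then logs the interface gaining carrier, as above.
}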
Nov 5 16:03:56.664725 systemd-networkd[1473]: calicf906d19911: Link UP Nov 5 16:03:56.666535 systemd-networkd[1473]: calicf906d19911: Gained carrier Nov 5 16:03:56.700688 containerd[1899]: 2025-11-05 16:03:54.057 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:56.700688 containerd[1899]: 2025-11-05 16:03:54.073 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0 csi-node-driver- calico-system d0a5c89c-b602-442e-811b-c3720b9add41 755 0 2025-11-05 16:03:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-11 csi-node-driver-7k4x5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicf906d19911 [] [] }} ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-" Nov 5 16:03:56.700688 containerd[1899]: 2025-11-05 16:03:54.073 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.700688 containerd[1899]: 2025-11-05 16:03:56.486 [INFO][4624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" HandleID="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Workload="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.488 [INFO][4624] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" HandleID="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Workload="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dee10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-11", "pod":"csi-node-driver-7k4x5", "timestamp":"2025-11-05 16:03:56.486773944 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.488 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.488 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.491 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.514 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" host="ip-172-31-16-11" Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.616 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.624 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.627 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.631 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:56.701436 containerd[1899]: 2025-11-05 16:03:56.631 [INFO][4624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" host="ip-172-31-16-11" Nov 5 16:03:56.702207 containerd[1899]: 2025-11-05 16:03:56.633 [INFO][4624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d Nov 5 16:03:56.702207 containerd[1899]: 2025-11-05 16:03:56.642 [INFO][4624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" host="ip-172-31-16-11" Nov 5 16:03:56.702207 containerd[1899]: 2025-11-05 16:03:56.652 [INFO][4624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.1/26] block=192.168.109.0/26 handle="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" host="ip-172-31-16-11" Nov 5 16:03:56.702207 containerd[1899]: 2025-11-05 16:03:56.652 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.1/26] handle="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" host="ip-172-31-16-11" Nov 5 16:03:56.702207 containerd[1899]: 2025-11-05 16:03:56.652 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
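[editor's note] The IPAM records above trace one allocation round: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.109.0/26 block, claim one free address, write the block back, and release the lock (192.168.109.1 goes to csi-node-driver-7k4x5). A deliberately simplified Go sketch of that sequence; this is not Calico's implementation, the block CIDR and handle string are taken from the log, and an in-memory map stands in for the datastore:

// Simplified model of the lock -> load block -> claim address -> release
// flow visible in the ipam/ records above.
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // address -> allocation handle
}

var (
	ipamLock sync.Mutex // stands in for the host-wide IPAM lock in the log
	hostBlk  = block{
		cidr: netip.MustParsePrefix("192.168.109.0/26"), // block from the log
		used: map[netip.Addr]string{},
	}
)

func autoAssign(handle string) (netip.Addr, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock() // "Released host-wide IPAM lock."

	// Claim the first free address, skipping the block's network address.
	for a := hostBlk.cidr.Addr().Next(); hostBlk.cidr.Contains(a); a = a.Next() {
		if _, taken := hostBlk.used[a]; !taken {
			hostBlk.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	ip, err := autoAssign("k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("Auto-assigned %s from %s\n", ip, hostBlk.cidr) // e.g. 192.168.109.1
}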
Nov 5 16:03:56.702207 containerd[1899]: 2025-11-05 16:03:56.652 [INFO][4624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.1/26] IPv6=[] ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" HandleID="k8s-pod-network.cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Workload="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.702360 containerd[1899]: 2025-11-05 16:03:56.656 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d0a5c89c-b602-442e-811b-c3720b9add41", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"csi-node-driver-7k4x5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf906d19911", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:56.702429 containerd[1899]: 2025-11-05 16:03:56.657 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.1/32] ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.702429 containerd[1899]: 2025-11-05 16:03:56.657 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf906d19911 ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.702429 containerd[1899]: 2025-11-05 16:03:56.668 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.702511 containerd[1899]: 2025-11-05 16:03:56.669 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" 
Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d0a5c89c-b602-442e-811b-c3720b9add41", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d", Pod:"csi-node-driver-7k4x5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicf906d19911", MAC:"5e:02:76:42:e8:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:56.702567 containerd[1899]: 2025-11-05 16:03:56.694 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" Namespace="calico-system" Pod="csi-node-driver-7k4x5" WorkloadEndpoint="ip--172--31--16--11-k8s-csi--node--driver--7k4x5-eth0" Nov 5 16:03:56.784872 systemd-networkd[1473]: cali042dec9bc69: Link UP Nov 5 16:03:56.788239 systemd-networkd[1473]: cali042dec9bc69: Gained carrier Nov 5 16:03:56.839749 containerd[1899]: 2025-11-05 16:03:54.173 [INFO][4638] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:56.839749 containerd[1899]: 2025-11-05 16:03:54.186 [INFO][4638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0 whisker-c75ccf967- calico-system 34aa5cb5-d018-431d-960a-4659dc21c0b7 953 0 2025-11-05 16:03:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c75ccf967 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-11 whisker-c75ccf967-dqkw4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali042dec9bc69 [] [] }} ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-" Nov 5 16:03:56.839749 containerd[1899]: 2025-11-05 16:03:54.186 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" 
Nov 5 16:03:56.839749 containerd[1899]: 2025-11-05 16:03:56.486 [INFO][4646] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" HandleID="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Workload="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.489 [INFO][4646] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" HandleID="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Workload="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000269e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-11", "pod":"whisker-c75ccf967-dqkw4", "timestamp":"2025-11-05 16:03:56.486635515 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.489 [INFO][4646] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.652 [INFO][4646] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.653 [INFO][4646] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.671 [INFO][4646] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" host="ip-172-31-16-11" Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.713 [INFO][4646] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.726 [INFO][4646] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.730 [INFO][4646] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.734 [INFO][4646] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:56.841950 containerd[1899]: 2025-11-05 16:03:56.734 [INFO][4646] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" host="ip-172-31-16-11" Nov 5 16:03:56.845054 containerd[1899]: 2025-11-05 16:03:56.736 [INFO][4646] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34 Nov 5 16:03:56.845054 containerd[1899]: 2025-11-05 16:03:56.757 [INFO][4646] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" host="ip-172-31-16-11" Nov 5 16:03:56.845054 containerd[1899]: 2025-11-05 16:03:56.765 [INFO][4646] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.2/26] block=192.168.109.0/26 handle="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" 
host="ip-172-31-16-11" Nov 5 16:03:56.845054 containerd[1899]: 2025-11-05 16:03:56.765 [INFO][4646] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.2/26] handle="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" host="ip-172-31-16-11" Nov 5 16:03:56.845054 containerd[1899]: 2025-11-05 16:03:56.765 [INFO][4646] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:56.845054 containerd[1899]: 2025-11-05 16:03:56.766 [INFO][4646] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.2/26] IPv6=[] ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" HandleID="k8s-pod-network.3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Workload="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" Nov 5 16:03:56.845735 containerd[1899]: 2025-11-05 16:03:56.776 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0", GenerateName:"whisker-c75ccf967-", Namespace:"calico-system", SelfLink:"", UID:"34aa5cb5-d018-431d-960a-4659dc21c0b7", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c75ccf967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"whisker-c75ccf967-dqkw4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali042dec9bc69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:56.845735 containerd[1899]: 2025-11-05 16:03:56.777 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.2/32] ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" Nov 5 16:03:56.845902 containerd[1899]: 2025-11-05 16:03:56.777 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali042dec9bc69 ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" Nov 5 16:03:56.845902 containerd[1899]: 2025-11-05 16:03:56.787 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" 
WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" Nov 5 16:03:56.846798 containerd[1899]: 2025-11-05 16:03:56.790 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0", GenerateName:"whisker-c75ccf967-", Namespace:"calico-system", SelfLink:"", UID:"34aa5cb5-d018-431d-960a-4659dc21c0b7", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c75ccf967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34", Pod:"whisker-c75ccf967-dqkw4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali042dec9bc69", MAC:"f2:f8:66:19:17:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:56.846953 containerd[1899]: 2025-11-05 16:03:56.825 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" Namespace="calico-system" Pod="whisker-c75ccf967-dqkw4" WorkloadEndpoint="ip--172--31--16--11-k8s-whisker--c75ccf967--dqkw4-eth0" Nov 5 16:03:56.935169 systemd-networkd[1473]: calia20e69b717f: Link UP Nov 5 16:03:56.943686 systemd-networkd[1473]: calia20e69b717f: Gained carrier Nov 5 16:03:57.010020 containerd[1899]: 2025-11-05 16:03:53.097 [INFO][4571] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:57.010020 containerd[1899]: 2025-11-05 16:03:53.418 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0 calico-kube-controllers-5f9c9c664f- calico-system 5b76ecda-67c8-4ccb-b2a9-6e4178612c50 862 0 2025-11-05 16:03:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f9c9c664f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-11 calico-kube-controllers-5f9c9c664f-fhtxd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia20e69b717f [] [] }} ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" 
WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-" Nov 5 16:03:57.010020 containerd[1899]: 2025-11-05 16:03:53.418 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.010020 containerd[1899]: 2025-11-05 16:03:56.484 [INFO][4604] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" HandleID="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Workload="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.487 [INFO][4604] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" HandleID="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Workload="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f230), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-11", "pod":"calico-kube-controllers-5f9c9c664f-fhtxd", "timestamp":"2025-11-05 16:03:56.484092829 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.487 [INFO][4604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.766 [INFO][4604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.767 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.790 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" host="ip-172-31-16-11" Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.818 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.836 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.841 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.010389 containerd[1899]: 2025-11-05 16:03:56.849 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.849 [INFO][4604] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" host="ip-172-31-16-11" Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.854 [INFO][4604] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.875 [INFO][4604] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" host="ip-172-31-16-11" Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.887 [INFO][4604] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.3/26] block=192.168.109.0/26 handle="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" host="ip-172-31-16-11" Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.887 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.3/26] handle="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" host="ip-172-31-16-11" Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.887 [INFO][4604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:57.010867 containerd[1899]: 2025-11-05 16:03:56.887 [INFO][4604] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.3/26] IPv6=[] ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" HandleID="k8s-pod-network.0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Workload="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.012282 containerd[1899]: 2025-11-05 16:03:56.912 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0", GenerateName:"calico-kube-controllers-5f9c9c664f-", Namespace:"calico-system", SelfLink:"", UID:"5b76ecda-67c8-4ccb-b2a9-6e4178612c50", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9c9c664f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"calico-kube-controllers-5f9c9c664f-fhtxd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia20e69b717f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:57.012401 containerd[1899]: 2025-11-05 16:03:56.912 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.3/32] ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.012401 containerd[1899]: 2025-11-05 16:03:56.912 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia20e69b717f ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.012401 containerd[1899]: 2025-11-05 16:03:56.956 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.012533 containerd[1899]: 2025-11-05 
16:03:56.961 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0", GenerateName:"calico-kube-controllers-5f9c9c664f-", Namespace:"calico-system", SelfLink:"", UID:"5b76ecda-67c8-4ccb-b2a9-6e4178612c50", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9c9c664f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b", Pod:"calico-kube-controllers-5f9c9c664f-fhtxd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia20e69b717f", MAC:"4a:48:fa:57:02:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:57.012635 containerd[1899]: 2025-11-05 16:03:56.994 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" Namespace="calico-system" Pod="calico-kube-controllers-5f9c9c664f-fhtxd" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--kube--controllers--5f9c9c664f--fhtxd-eth0" Nov 5 16:03:57.124227 systemd-networkd[1473]: cali9c935ab427d: Link UP Nov 5 16:03:57.125961 systemd-networkd[1473]: cali9c935ab427d: Gained carrier Nov 5 16:03:57.145419 containerd[1899]: time="2025-11-05T16:03:57.145343642Z" level=info msg="connecting to shim cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d" address="unix:///run/containerd/s/bd775be5fe624ccafecac44410dd72f66a772ede5eea077f47d9817328c133fc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:57.162858 containerd[1899]: time="2025-11-05T16:03:57.160943390Z" level=info msg="connecting to shim 0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b" address="unix:///run/containerd/s/cad2fa4634750112c6bd74da37da5a68b732d2266e6ac0856301dc5d9cc8b65d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:57.162858 containerd[1899]: 2025-11-05 16:03:53.083 [INFO][4577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 16:03:57.162858 containerd[1899]: 2025-11-05 16:03:53.421 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0 
calico-apiserver-8fffdb464- calico-apiserver 436b2852-bb09-4690-8210-c17e2fe57e96 864 0 2025-11-05 16:03:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8fffdb464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-11 calico-apiserver-8fffdb464-q5zql eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c935ab427d [] [] }} ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-" Nov 5 16:03:57.162858 containerd[1899]: 2025-11-05 16:03:53.421 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.485 [INFO][4606] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" HandleID="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Workload="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.487 [INFO][4606] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" HandleID="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Workload="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-11", "pod":"calico-apiserver-8fffdb464-q5zql", "timestamp":"2025-11-05 16:03:56.485619548 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.487 [INFO][4606] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.887 [INFO][4606] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.888 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.915 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" host="ip-172-31-16-11" Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.960 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.978 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.163258 containerd[1899]: 2025-11-05 16:03:56.998 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.022 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.030 [INFO][4606] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" host="ip-172-31-16-11" Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.039 [INFO][4606] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.054 [INFO][4606] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" host="ip-172-31-16-11" Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.077 [INFO][4606] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.4/26] block=192.168.109.0/26 handle="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" host="ip-172-31-16-11" Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.077 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.4/26] handle="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" host="ip-172-31-16-11" Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.077 [INFO][4606] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:03:57.164139 containerd[1899]: 2025-11-05 16:03:57.077 [INFO][4606] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.4/26] IPv6=[] ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" HandleID="k8s-pod-network.6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Workload="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.164482 containerd[1899]: 2025-11-05 16:03:57.103 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0", GenerateName:"calico-apiserver-8fffdb464-", Namespace:"calico-apiserver", SelfLink:"", UID:"436b2852-bb09-4690-8210-c17e2fe57e96", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fffdb464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"calico-apiserver-8fffdb464-q5zql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c935ab427d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:57.164596 containerd[1899]: 2025-11-05 16:03:57.104 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.4/32] ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.164596 containerd[1899]: 2025-11-05 16:03:57.104 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c935ab427d ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.164596 containerd[1899]: 2025-11-05 16:03:57.126 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.164734 containerd[1899]: 2025-11-05 16:03:57.129 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0", GenerateName:"calico-apiserver-8fffdb464-", Namespace:"calico-apiserver", SelfLink:"", UID:"436b2852-bb09-4690-8210-c17e2fe57e96", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fffdb464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d", Pod:"calico-apiserver-8fffdb464-q5zql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c935ab427d", MAC:"ae:49:12:29:c6:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:57.164834 containerd[1899]: 2025-11-05 16:03:57.155 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-q5zql" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--q5zql-eth0" Nov 5 16:03:57.181274 containerd[1899]: time="2025-11-05T16:03:57.181163335Z" level=info msg="connecting to shim 3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34" address="unix:///run/containerd/s/5a1c1e8fe7bf57c09d20b8a0f5e4c6192b6d1f2099147172fbd15cc9404e2892" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:57.185385 systemd-networkd[1473]: vxlan.calico: Gained IPv6LL Nov 5 16:03:57.295256 systemd[1]: Started cri-containerd-cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d.scope - libcontainer container cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d. Nov 5 16:03:57.367581 containerd[1899]: time="2025-11-05T16:03:57.367532776Z" level=info msg="connecting to shim 6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d" address="unix:///run/containerd/s/9f7709a396ed2f449330f29dc53279916f2ceb2160654d5ec176c720dfbd2385" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:57.375926 systemd[1]: Started cri-containerd-0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b.scope - libcontainer container 0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b. 
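[editor's note] Each "connecting to shim ... protocol=ttrpc version=3" record is containerd dialing a per-sandbox shim socket and speaking ttrpc over it before the matching cri-containerd-*.scope units start. A minimal sketch of that connection step using github.com/containerd/ttrpc, assuming only the socket path shown in the log; real callers layer containerd's generated task-service stubs on top of this client:

// Dial a shim's unix socket and wrap it in a ttrpc client, as the
// "connecting to shim" records describe. Sketch only.
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Socket path copied from the "connecting to shim" record above.
	const sock = "/run/containerd/s/bd775be5fe624ccafecac44410dd72f66a772ede5eea077f47d9817328c133fc"

	conn, err := net.Dial("unix", sock)
	if err != nil {
		log.Fatalf("dial shim socket: %v", err)
	}
	client := ttrpc.NewClient(conn)
	defer client.Close()

	// From here containerd issues task-service RPCs (Create, Start, Wait, ...)
	// over this connection; the "Started cri-containerd-....scope" lines in the
	// log show systemd starting the corresponding libcontainer scopes.
	log.Println("ttrpc client connected to", sock)
}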
Nov 5 16:03:57.401385 systemd[1]: Started cri-containerd-3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34.scope - libcontainer container 3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34. Nov 5 16:03:57.486392 systemd[1]: Started cri-containerd-6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d.scope - libcontainer container 6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d. Nov 5 16:03:57.532322 containerd[1899]: time="2025-11-05T16:03:57.532163882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7k4x5,Uid:d0a5c89c-b602-442e-811b-c3720b9add41,Namespace:calico-system,Attempt:0,} returns sandbox id \"cba9412822e1647627abe6ddbf5b92177daa87b267ec35d208ee11be74e4d95d\"" Nov 5 16:03:57.563259 containerd[1899]: time="2025-11-05T16:03:57.563217317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:03:57.586347 containerd[1899]: time="2025-11-05T16:03:57.586237721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c75ccf967-dqkw4,Uid:34aa5cb5-d018-431d-960a-4659dc21c0b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3eeef3ef373a9ec12b32428fbbf3ee168d5dcb5ba0bc2b88063e4c1d23f6dc34\"" Nov 5 16:03:57.652540 containerd[1899]: time="2025-11-05T16:03:57.652441876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9c9c664f-fhtxd,Uid:5b76ecda-67c8-4ccb-b2a9-6e4178612c50,Namespace:calico-system,Attempt:0,} returns sandbox id \"0056045d15c1196460047063fa8906b609733993ca517c0094fe6f5807ee456b\"" Nov 5 16:03:57.685877 containerd[1899]: time="2025-11-05T16:03:57.685793456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-q5zql,Uid:436b2852-bb09-4690-8210-c17e2fe57e96,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6771c7aaf9a0f86a6e6995a3412ec5bd0ea3c19cebe44dbe5d6f24ba097b9d2d\"" Nov 5 16:03:57.727609 systemd-networkd[1473]: cali732c8362e9c: Link UP Nov 5 16:03:57.729119 systemd-networkd[1473]: cali732c8362e9c: Gained carrier Nov 5 16:03:57.762064 containerd[1899]: 2025-11-05 16:03:57.409 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0 calico-apiserver-8fffdb464- calico-apiserver 96835183-cb2e-4158-994a-2b18537288b4 865 0 2025-11-05 16:03:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8fffdb464 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-11 calico-apiserver-8fffdb464-mjcqs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali732c8362e9c [] [] }} ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-" Nov 5 16:03:57.762064 containerd[1899]: 2025-11-05 16:03:57.410 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.762064 containerd[1899]: 2025-11-05 16:03:57.635 [INFO][5049] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" HandleID="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Workload="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.635 [INFO][5049] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" HandleID="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Workload="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ffac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-11", "pod":"calico-apiserver-8fffdb464-mjcqs", "timestamp":"2025-11-05 16:03:57.635240251 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.635 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.637 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.637 [INFO][5049] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.656 [INFO][5049] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" host="ip-172-31-16-11" Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.675 [INFO][5049] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.684 [INFO][5049] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.688 [INFO][5049] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.764520 containerd[1899]: 2025-11-05 16:03:57.693 [INFO][5049] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:03:57.765908 containerd[1899]: 2025-11-05 16:03:57.693 [INFO][5049] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" host="ip-172-31-16-11" Nov 5 16:03:57.765908 containerd[1899]: 2025-11-05 16:03:57.696 [INFO][5049] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9 Nov 5 16:03:57.765908 containerd[1899]: 2025-11-05 16:03:57.708 [INFO][5049] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" host="ip-172-31-16-11" Nov 5 16:03:57.765908 containerd[1899]: 2025-11-05 16:03:57.719 [INFO][5049] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.5/26] block=192.168.109.0/26 handle="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" host="ip-172-31-16-11" Nov 5 16:03:57.765908 containerd[1899]: 
2025-11-05 16:03:57.719 [INFO][5049] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.5/26] handle="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" host="ip-172-31-16-11" Nov 5 16:03:57.765908 containerd[1899]: 2025-11-05 16:03:57.719 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 16:03:57.765908 containerd[1899]: 2025-11-05 16:03:57.719 [INFO][5049] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.5/26] IPv6=[] ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" HandleID="k8s-pod-network.144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Workload="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.766413 containerd[1899]: 2025-11-05 16:03:57.722 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0", GenerateName:"calico-apiserver-8fffdb464-", Namespace:"calico-apiserver", SelfLink:"", UID:"96835183-cb2e-4158-994a-2b18537288b4", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fffdb464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"calico-apiserver-8fffdb464-mjcqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali732c8362e9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:57.767277 containerd[1899]: 2025-11-05 16:03:57.722 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.5/32] ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.767277 containerd[1899]: 2025-11-05 16:03:57.722 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali732c8362e9c ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.767277 containerd[1899]: 2025-11-05 16:03:57.727 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.767397 containerd[1899]: 2025-11-05 16:03:57.733 [INFO][4919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0", GenerateName:"calico-apiserver-8fffdb464-", Namespace:"calico-apiserver", SelfLink:"", UID:"96835183-cb2e-4158-994a-2b18537288b4", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8fffdb464", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9", Pod:"calico-apiserver-8fffdb464-mjcqs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali732c8362e9c", MAC:"de:6d:e0:cd:cb:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:03:57.767553 containerd[1899]: 2025-11-05 16:03:57.750 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" Namespace="calico-apiserver" Pod="calico-apiserver-8fffdb464-mjcqs" WorkloadEndpoint="ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0" Nov 5 16:03:57.804971 containerd[1899]: time="2025-11-05T16:03:57.804868825Z" level=info msg="connecting to shim 144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9" address="unix:///run/containerd/s/666a28d4ad0b55b4199b917bae220ca5efa22ab89ef334ecf1760a84bb5614a6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:03:57.837288 systemd[1]: Started cri-containerd-144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9.scope - libcontainer container 144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9. 
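[editor's note] For each pod the plugin logs the same two-step endpoint update: "Populated endpoint" (identity, node, profiles, IPs, but empty ContainerID and MAC), then "Added Mac, interface name, and active container ID to endpoint" once the host-side veth exists, followed by "Wrote updated endpoint to datastore". The sketch below mirrors that shape with small local types only; it is not the real projectcalico.org/v3 Go API, and the field values are simply copied from the mjcqs endpoint dumped above.

    package main

    import "fmt"

    // Illustrative local type; the real structure is the projectcalico.org/v3
    // WorkloadEndpoint dumped in the log above.
    type workloadEndpoint struct {
    	Name          string
    	Namespace     string
    	Pod           string
    	Node          string
    	InterfaceName string
    	IPNetworks    []string
    	ContainerID   string // empty until the dataplane is wired up
    	MAC           string // empty until the host-side veth is created
    }

    func main() {
    	// Step 1: "Populated endpoint" - identity and IPs, no MAC or container ID yet.
    	ep := workloadEndpoint{
    		Name:          "ip--172--31--16--11-k8s-calico--apiserver--8fffdb464--mjcqs-eth0",
    		Namespace:     "calico-apiserver",
    		Pod:           "calico-apiserver-8fffdb464-mjcqs",
    		Node:          "ip-172-31-16-11",
    		InterfaceName: "cali732c8362e9c",
    		IPNetworks:    []string{"192.168.109.5/32"},
    	}
    	fmt.Printf("populated: %+v\n", ep)

    	// Step 2: "Added Mac, interface name, and active container ID to endpoint",
    	// then the endpoint is written to the datastore.
    	ep.MAC = "de:6d:e0:cd:cb:f6"
    	ep.ContainerID = "144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9"
    	fmt.Printf("written to datastore: %+v\n", ep)
    }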
Nov 5 16:03:57.867998 containerd[1899]: time="2025-11-05T16:03:57.867123672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:57.869348 containerd[1899]: time="2025-11-05T16:03:57.869212890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:03:57.869348 containerd[1899]: time="2025-11-05T16:03:57.869315090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:03:57.887053 kubelet[3198]: E1105 16:03:57.879784 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:57.910682 kubelet[3198]: E1105 16:03:57.910430 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:03:57.911733 kubelet[3198]: E1105 16:03:57.911397 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:57.913934 containerd[1899]: time="2025-11-05T16:03:57.913615477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:03:57.920332 containerd[1899]: time="2025-11-05T16:03:57.920274036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8fffdb464-mjcqs,Uid:96835183-cb2e-4158-994a-2b18537288b4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"144fd070baaa3630c1e26c7c7544f2a5e84b69e6627b6e9bfe7fcd9e7f42afb9\"" Nov 5 16:03:58.081248 systemd-networkd[1473]: calia20e69b717f: Gained IPv6LL Nov 5 16:03:58.224070 containerd[1899]: time="2025-11-05T16:03:58.224015531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:58.226856 containerd[1899]: time="2025-11-05T16:03:58.226651576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:03:58.226856 containerd[1899]: time="2025-11-05T16:03:58.226740455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:03:58.228094 kubelet[3198]: E1105 16:03:58.228033 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:58.228094 kubelet[3198]: E1105 16:03:58.228097 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:03:58.228947 containerd[1899]: time="2025-11-05T16:03:58.228912818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:03:58.237420 kubelet[3198]: E1105 16:03:58.237366 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:58.337718 systemd-networkd[1473]: calicf906d19911: Gained IPv6LL Nov 5 16:03:58.593382 systemd-networkd[1473]: cali042dec9bc69: Gained IPv6LL Nov 5 16:03:58.689329 containerd[1899]: time="2025-11-05T16:03:58.689263108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:58.691688 containerd[1899]: time="2025-11-05T16:03:58.691488947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:03:58.692456 kubelet[3198]: E1105 16:03:58.692022 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:58.692456 kubelet[3198]: E1105 16:03:58.692080 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:03:58.692456 kubelet[3198]: E1105 16:03:58.692310 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c9c664f-fhtxd_calico-system(5b76ecda-67c8-4ccb-b2a9-6e4178612c50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:58.693038 containerd[1899]: time="2025-11-05T16:03:58.692968082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:58.702085 containerd[1899]: 
time="2025-11-05T16:03:58.702029981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:03:58.703336 kubelet[3198]: E1105 16:03:58.703213 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:03:59.028080 containerd[1899]: time="2025-11-05T16:03:59.027948523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:59.030356 containerd[1899]: time="2025-11-05T16:03:59.030283757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:59.030356 containerd[1899]: time="2025-11-05T16:03:59.030309091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:59.030646 kubelet[3198]: E1105 16:03:59.030582 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:59.031955 kubelet[3198]: E1105 16:03:59.030655 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:59.031955 kubelet[3198]: E1105 16:03:59.030879 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-q5zql_calico-apiserver(436b2852-bb09-4690-8210-c17e2fe57e96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:59.031955 kubelet[3198]: E1105 16:03:59.030926 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:03:59.032187 containerd[1899]: time="2025-11-05T16:03:59.031539284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:03:59.105643 
systemd-networkd[1473]: cali9c935ab427d: Gained IPv6LL Nov 5 16:03:59.318082 containerd[1899]: time="2025-11-05T16:03:59.317894083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:59.320107 containerd[1899]: time="2025-11-05T16:03:59.320049470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:03:59.320281 containerd[1899]: time="2025-11-05T16:03:59.320081909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:03:59.320437 kubelet[3198]: E1105 16:03:59.320381 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:59.320437 kubelet[3198]: E1105 16:03:59.320441 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:03:59.320849 containerd[1899]: time="2025-11-05T16:03:59.320794452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:03:59.325095 kubelet[3198]: E1105 16:03:59.324949 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:59.326043 kubelet[3198]: E1105 16:03:59.325133 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:59.554170 systemd-networkd[1473]: cali732c8362e9c: Gained IPv6LL Nov 5 16:03:59.562584 kubelet[3198]: E1105 16:03:59.562529 3198 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:03:59.564769 kubelet[3198]: E1105 16:03:59.564644 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:03:59.565375 kubelet[3198]: E1105 16:03:59.565307 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:03:59.636531 containerd[1899]: time="2025-11-05T16:03:59.636443495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:03:59.638651 containerd[1899]: time="2025-11-05T16:03:59.638591673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:03:59.638785 containerd[1899]: time="2025-11-05T16:03:59.638679871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:03:59.638888 kubelet[3198]: E1105 16:03:59.638852 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:59.638956 kubelet[3198]: E1105 16:03:59.638894 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:03:59.639367 kubelet[3198]: E1105 16:03:59.639312 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-mjcqs_calico-apiserver(96835183-cb2e-4158-994a-2b18537288b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:03:59.639627 kubelet[3198]: E1105 16:03:59.639361 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:03:59.640439 containerd[1899]: time="2025-11-05T16:03:59.640372939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:04:00.064009 containerd[1899]: time="2025-11-05T16:04:00.063253706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:00.098394 containerd[1899]: time="2025-11-05T16:04:00.098311245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:00.098394 containerd[1899]: time="2025-11-05T16:04:00.098364186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:04:00.141035 kubelet[3198]: E1105 16:04:00.102351 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:00.141035 kubelet[3198]: E1105 16:04:00.102403 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:00.141035 kubelet[3198]: E1105 16:04:00.102489 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:00.149043 kubelet[3198]: E1105 16:04:00.102542 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:04:00.562709 kubelet[3198]: E1105 16:04:00.562643 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:04:00.568949 kubelet[3198]: E1105 16:04:00.568891 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:04:01.939383 ntpd[1844]: Listen normally on 6 vxlan.calico 192.168.109.0:123 Nov 5 16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 6 vxlan.calico 192.168.109.0:123 Nov 5 16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 7 vxlan.calico [fe80::6446:44ff:fe47:338e%4]:123 Nov 5 16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 8 calicf906d19911 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 5 16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 9 cali042dec9bc69 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 10 calia20e69b717f [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 11 cali9c935ab427d [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 
16:04:01.942722 ntpd[1844]: 5 Nov 16:04:01 ntpd[1844]: Listen normally on 12 cali732c8362e9c [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 16:04:01.939462 ntpd[1844]: Listen normally on 7 vxlan.calico [fe80::6446:44ff:fe47:338e%4]:123 Nov 5 16:04:01.939496 ntpd[1844]: Listen normally on 8 calicf906d19911 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 5 16:04:01.939522 ntpd[1844]: Listen normally on 9 cali042dec9bc69 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 16:04:01.939548 ntpd[1844]: Listen normally on 10 calia20e69b717f [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 16:04:01.939650 ntpd[1844]: Listen normally on 11 cali9c935ab427d [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 16:04:01.939680 ntpd[1844]: Listen normally on 12 cali732c8362e9c [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 16:04:03.008789 containerd[1899]: time="2025-11-05T16:04:03.008253649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glcrv,Uid:094f6189-9d15-415e-a528-9777b0761bec,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:03.313945 systemd-networkd[1473]: cali178a46e49ad: Link UP Nov 5 16:04:03.314382 systemd-networkd[1473]: cali178a46e49ad: Gained carrier Nov 5 16:04:03.324132 (udev-worker)[5182]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:04:03.346309 containerd[1899]: 2025-11-05 16:04:03.126 [INFO][5159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0 coredns-66bc5c9577- kube-system 094f6189-9d15-415e-a528-9777b0761bec 856 0 2025-11-05 16:02:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-11 coredns-66bc5c9577-glcrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali178a46e49ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-" Nov 5 16:04:03.346309 containerd[1899]: 2025-11-05 16:04:03.128 [INFO][5159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 16:04:03.346309 containerd[1899]: 2025-11-05 16:04:03.201 [INFO][5173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" HandleID="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Workload="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.202 [INFO][5173] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" HandleID="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Workload="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-11", "pod":"coredns-66bc5c9577-glcrv", "timestamp":"2025-11-05 16:04:03.201741596 
+0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.202 [INFO][5173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.202 [INFO][5173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.202 [INFO][5173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.237 [INFO][5173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" host="ip-172-31-16-11" Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.250 [INFO][5173] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.266 [INFO][5173] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.270 [INFO][5173] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.275 [INFO][5173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:03.346607 containerd[1899]: 2025-11-05 16:04:03.275 [INFO][5173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" host="ip-172-31-16-11" Nov 5 16:04:03.349102 containerd[1899]: 2025-11-05 16:04:03.278 [INFO][5173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112 Nov 5 16:04:03.349102 containerd[1899]: 2025-11-05 16:04:03.287 [INFO][5173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" host="ip-172-31-16-11" Nov 5 16:04:03.349102 containerd[1899]: 2025-11-05 16:04:03.299 [INFO][5173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.6/26] block=192.168.109.0/26 handle="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" host="ip-172-31-16-11" Nov 5 16:04:03.349102 containerd[1899]: 2025-11-05 16:04:03.299 [INFO][5173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.6/26] handle="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" host="ip-172-31-16-11" Nov 5 16:04:03.349102 containerd[1899]: 2025-11-05 16:04:03.299 [INFO][5173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
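[editor's note] Every CNI ADD above walks the same IPAM sequence: acquire the host-wide IPAM lock, look up the node's block affinity (192.168.109.0/26), load the block, claim the next free address, write the block back, release the lock. The following is a deliberately simplified Go sketch of that serialization pattern, not Calico's actual implementation; the block and the .4-.7 assignments match this log, and the assumption that .0-.3 were already taken (vxlan device plus earlier pods) is inferred from it.

    package main

    import (
    	"fmt"
    	"net/netip"
    	"sync"
    )

    // blockAllocator is a toy stand-in for Calico IPAM on one node:
    // a single /26 block affine to the host, guarded by a host-wide lock.
    type blockAllocator struct {
    	mu    sync.Mutex // the "host-wide IPAM lock" in the log
    	block netip.Prefix
    	next  netip.Addr
    	used  map[netip.Addr]string // addr -> handle
    }

    func newBlockAllocator(cidr string) *blockAllocator {
    	p := netip.MustParsePrefix(cidr)
    	return &blockAllocator{block: p, next: p.Addr(), used: map[netip.Addr]string{}}
    }

    // assign hands out the next free address in the block, mirroring the
    // acquire-lock / load-block / claim-IP / release-lock steps logged above.
    func (b *blockAllocator) assign(handle string) (netip.Addr, error) {
    	b.mu.Lock()
    	defer b.mu.Unlock()
    	for a := b.next; b.block.Contains(a); a = a.Next() {
    		if _, taken := b.used[a]; !taken {
    			b.used[a] = handle
    			b.next = a.Next()
    			return a, nil
    		}
    	}
    	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.block)
    }

    func main() {
    	alloc := newBlockAllocator("192.168.109.0/26")
    	// Assume .0-.3 were already in use on this node before these pods.
    	for i := 0; i < 4; i++ {
    		alloc.assign("pre-existing")
    	}
    	for _, pod := range []string{
    		"calico-apiserver-8fffdb464-q5zql", // got .4
    		"calico-apiserver-8fffdb464-mjcqs", // got .5
    		"coredns-66bc5c9577-glcrv",         // got .6
    		"goldmane-7c778bb748-qgjqk",        // got .7
    	} {
    		ip, _ := alloc.assign(pod)
    		fmt.Printf("%s -> %s\n", pod, ip)
    	}
    }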
Nov 5 16:04:03.349102 containerd[1899]: 2025-11-05 16:04:03.299 [INFO][5173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.6/26] IPv6=[] ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" HandleID="k8s-pod-network.7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Workload="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 16:04:03.350324 containerd[1899]: 2025-11-05 16:04:03.304 [INFO][5159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"094f6189-9d15-415e-a528-9777b0761bec", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"coredns-66bc5c9577-glcrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali178a46e49ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:03.350324 containerd[1899]: 2025-11-05 16:04:03.304 [INFO][5159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.6/32] ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 16:04:03.350324 containerd[1899]: 2025-11-05 16:04:03.304 [INFO][5159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali178a46e49ad ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 
16:04:03.350324 containerd[1899]: 2025-11-05 16:04:03.311 [INFO][5159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 16:04:03.350324 containerd[1899]: 2025-11-05 16:04:03.312 [INFO][5159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"094f6189-9d15-415e-a528-9777b0761bec", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112", Pod:"coredns-66bc5c9577-glcrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali178a46e49ad", MAC:"5e:d0:03:70:8c:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:03.350324 containerd[1899]: 2025-11-05 16:04:03.334 [INFO][5159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" Namespace="kube-system" Pod="coredns-66bc5c9577-glcrv" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--glcrv-eth0" Nov 5 16:04:03.417204 containerd[1899]: time="2025-11-05T16:04:03.417140776Z" level=info msg="connecting to shim 7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112" address="unix:///run/containerd/s/5a68ab818189c585213ac781adecf7cd45e6a2f6ce21131af257c2b2ab2a27ca" namespace=k8s.io protocol=ttrpc version=3 Nov 5 
16:04:03.509340 systemd[1]: Started cri-containerd-7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112.scope - libcontainer container 7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112. Nov 5 16:04:03.575802 containerd[1899]: time="2025-11-05T16:04:03.574728216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glcrv,Uid:094f6189-9d15-415e-a528-9777b0761bec,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112\"" Nov 5 16:04:03.596257 containerd[1899]: time="2025-11-05T16:04:03.596206453Z" level=info msg="CreateContainer within sandbox \"7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:04:03.775895 containerd[1899]: time="2025-11-05T16:04:03.775318964Z" level=info msg="Container 941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:03.775505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132128169.mount: Deactivated successfully. Nov 5 16:04:03.789003 containerd[1899]: time="2025-11-05T16:04:03.788935839Z" level=info msg="CreateContainer within sandbox \"7bf180eae7ba9a48f09e0a3a2f0506c16887ea6bff8e1d795a36114c80082112\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3\"" Nov 5 16:04:03.791020 containerd[1899]: time="2025-11-05T16:04:03.790771018Z" level=info msg="StartContainer for \"941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3\"" Nov 5 16:04:03.792560 containerd[1899]: time="2025-11-05T16:04:03.792480381Z" level=info msg="connecting to shim 941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3" address="unix:///run/containerd/s/5a68ab818189c585213ac781adecf7cd45e6a2f6ce21131af257c2b2ab2a27ca" protocol=ttrpc version=3 Nov 5 16:04:03.816235 systemd[1]: Started cri-containerd-941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3.scope - libcontainer container 941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3. Nov 5 16:04:03.905150 containerd[1899]: time="2025-11-05T16:04:03.905102567Z" level=info msg="StartContainer for \"941febbece4c3485c29d1feaeffdadeec104d860998f3b5835db886f49c2afc3\" returns successfully" Nov 5 16:04:04.717258 systemd[1]: Started sshd@7-172.31.16.11:22-139.178.68.195:57960.service - OpenSSH per-connection server daemon (139.178.68.195:57960). Nov 5 16:04:04.865660 systemd-networkd[1473]: cali178a46e49ad: Gained IPv6LL Nov 5 16:04:04.959678 sshd[5270]: Accepted publickey for core from 139.178.68.195 port 57960 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:04.962284 sshd-session[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:04.969482 systemd-logind[1855]: New session 8 of user core. Nov 5 16:04:04.974196 systemd[1]: Started session-8.scope - Session 8 of User core. 
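[editor's note] The coredns WorkloadEndpoint dumped above lists its ports as Go hex literals (Port:0x35, 0x23c1, 0x1f90, 0x1ff5). A trivial Go check confirms these are the expected CoreDNS ports; the names and values are taken verbatim from the endpoint dump.

    package main

    import "fmt"

    func main() {
    	// Ports exactly as dumped in the coredns WorkloadEndpoint above.
    	ports := map[string]uint16{
    		"dns":             0x35,   // 53/UDP
    		"dns-tcp":         0x35,   // 53/TCP
    		"metrics":         0x23c1, // 9153
    		"liveness-probe":  0x1f90, // 8080
    		"readiness-probe": 0x1ff5, // 8181
    	}
    	for name, p := range ports {
    		fmt.Printf("%-16s %d\n", name, p)
    	}
    }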
Nov 5 16:04:06.044298 containerd[1899]: time="2025-11-05T16:04:06.043247163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qgjqk,Uid:c259a7b3-0c1e-4695-b558-e42d28fb4911,Namespace:calico-system,Attempt:0,}" Nov 5 16:04:06.197382 sshd[5276]: Connection closed by 139.178.68.195 port 57960 Nov 5 16:04:06.200305 sshd-session[5270]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:06.217673 systemd[1]: sshd@7-172.31.16.11:22-139.178.68.195:57960.service: Deactivated successfully. Nov 5 16:04:06.225701 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 16:04:06.229707 systemd-logind[1855]: Session 8 logged out. Waiting for processes to exit. Nov 5 16:04:06.237048 systemd-logind[1855]: Removed session 8. Nov 5 16:04:06.325397 systemd-networkd[1473]: calia9ba0776464: Link UP Nov 5 16:04:06.327040 systemd-networkd[1473]: calia9ba0776464: Gained carrier Nov 5 16:04:06.331221 (udev-worker)[5184]: Network interface NamePolicy= disabled on kernel command line. Nov 5 16:04:06.361933 kubelet[3198]: I1105 16:04:06.353615 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-glcrv" podStartSLOduration=88.344923043 podStartE2EDuration="1m28.344923043s" podCreationTimestamp="2025-11-05 16:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:04.640569609 +0000 UTC m=+92.826625404" watchObservedRunningTime="2025-11-05 16:04:06.344923043 +0000 UTC m=+94.530978836" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.160 [INFO][5286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0 goldmane-7c778bb748- calico-system c259a7b3-0c1e-4695-b558-e42d28fb4911 866 0 2025-11-05 16:03:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-11 goldmane-7c778bb748-qgjqk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia9ba0776464 [] [] }} ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.161 [INFO][5286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.240 [INFO][5299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" HandleID="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Workload="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.244 [INFO][5299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" HandleID="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" 
Workload="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-11", "pod":"goldmane-7c778bb748-qgjqk", "timestamp":"2025-11-05 16:04:06.240703694 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.244 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.244 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.244 [INFO][5299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.260 [INFO][5299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.268 [INFO][5299] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.280 [INFO][5299] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.284 [INFO][5299] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.291 [INFO][5299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.291 [INFO][5299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.294 [INFO][5299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.303 [INFO][5299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.313 [INFO][5299] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.7/26] block=192.168.109.0/26 handle="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.314 [INFO][5299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.7/26] handle="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" host="ip-172-31-16-11" Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.314 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:04:06.367876 containerd[1899]: 2025-11-05 16:04:06.314 [INFO][5299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.7/26] IPv6=[] ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" HandleID="k8s-pod-network.1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Workload="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.371731 containerd[1899]: 2025-11-05 16:04:06.320 [INFO][5286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c259a7b3-0c1e-4695-b558-e42d28fb4911", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"goldmane-7c778bb748-qgjqk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9ba0776464", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:06.371731 containerd[1899]: 2025-11-05 16:04:06.320 [INFO][5286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.7/32] ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.371731 containerd[1899]: 2025-11-05 16:04:06.320 [INFO][5286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9ba0776464 ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.371731 containerd[1899]: 2025-11-05 16:04:06.326 [INFO][5286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.371731 containerd[1899]: 2025-11-05 16:04:06.327 [INFO][5286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" 
WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c259a7b3-0c1e-4695-b558-e42d28fb4911", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a", Pod:"goldmane-7c778bb748-qgjqk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9ba0776464", MAC:"ae:71:5f:35:bc:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:06.371731 containerd[1899]: 2025-11-05 16:04:06.347 [INFO][5286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" Namespace="calico-system" Pod="goldmane-7c778bb748-qgjqk" WorkloadEndpoint="ip--172--31--16--11-k8s-goldmane--7c778bb748--qgjqk-eth0" Nov 5 16:04:06.441257 containerd[1899]: time="2025-11-05T16:04:06.441204376Z" level=info msg="connecting to shim 1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a" address="unix:///run/containerd/s/7c7073e246638d345d2bd56d8a6a67a2ca32abd90cfff2da46bfbe46544df3dc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:06.511354 systemd[1]: Started cri-containerd-1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a.scope - libcontainer container 1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a. 
Nov 5 16:04:06.578596 containerd[1899]: time="2025-11-05T16:04:06.578466473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-qgjqk,Uid:c259a7b3-0c1e-4695-b558-e42d28fb4911,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fe989fc1b40db406a8f6466b3dc03980ce1ba7f786e603594f99d88b7934d2a\"" Nov 5 16:04:06.583443 containerd[1899]: time="2025-11-05T16:04:06.583395288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:04:06.866175 containerd[1899]: time="2025-11-05T16:04:06.866036885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:06.868435 containerd[1899]: time="2025-11-05T16:04:06.868377045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:04:06.868435 containerd[1899]: time="2025-11-05T16:04:06.868389345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:06.868718 kubelet[3198]: E1105 16:04:06.868654 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:06.868816 kubelet[3198]: E1105 16:04:06.868717 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:06.869184 kubelet[3198]: E1105 16:04:06.868813 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:06.869184 kubelet[3198]: E1105 16:04:06.868857 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:04:07.008749 containerd[1899]: time="2025-11-05T16:04:07.008708062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw92k,Uid:a38961a6-3ae9-4766-af33-07fe9a74faa6,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:07.169587 systemd-networkd[1473]: cali74d50f73832: Link UP Nov 5 16:04:07.172290 systemd-networkd[1473]: cali74d50f73832: Gained carrier Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.077 [INFO][5372] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0 coredns-66bc5c9577- kube-system a38961a6-3ae9-4766-af33-07fe9a74faa6 861 0 2025-11-05 16:02:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-11 coredns-66bc5c9577-vw92k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74d50f73832 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.077 [INFO][5372] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.114 [INFO][5383] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" HandleID="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Workload="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.115 [INFO][5383] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" HandleID="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Workload="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd820), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-11", "pod":"coredns-66bc5c9577-vw92k", "timestamp":"2025-11-05 16:04:07.114812354 +0000 UTC"}, Hostname:"ip-172-31-16-11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.115 [INFO][5383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.115 [INFO][5383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.115 [INFO][5383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-11' Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.125 [INFO][5383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.132 [INFO][5383] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.138 [INFO][5383] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.141 [INFO][5383] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.144 [INFO][5383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.144 [INFO][5383] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.146 [INFO][5383] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.151 [INFO][5383] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.161 [INFO][5383] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.109.8/26] block=192.168.109.0/26 handle="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.161 [INFO][5383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.8/26] handle="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" host="ip-172-31-16-11" Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.161 [INFO][5383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 16:04:07.202938 containerd[1899]: 2025-11-05 16:04:07.161 [INFO][5383] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.109.8/26] IPv6=[] ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" HandleID="k8s-pod-network.2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Workload="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 16:04:07.208470 containerd[1899]: 2025-11-05 16:04:07.165 [INFO][5372] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a38961a6-3ae9-4766-af33-07fe9a74faa6", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"", Pod:"coredns-66bc5c9577-vw92k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74d50f73832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:07.208470 containerd[1899]: 2025-11-05 16:04:07.165 [INFO][5372] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.8/32] ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 16:04:07.208470 containerd[1899]: 2025-11-05 16:04:07.165 [INFO][5372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74d50f73832 ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 
16:04:07.208470 containerd[1899]: 2025-11-05 16:04:07.173 [INFO][5372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 16:04:07.208470 containerd[1899]: 2025-11-05 16:04:07.175 [INFO][5372] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a38961a6-3ae9-4766-af33-07fe9a74faa6", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 16, 2, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-11", ContainerID:"2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c", Pod:"coredns-66bc5c9577-vw92k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74d50f73832", MAC:"02:e9:7e:4d:33:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 16:04:07.208470 containerd[1899]: 2025-11-05 16:04:07.198 [INFO][5372] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" Namespace="kube-system" Pod="coredns-66bc5c9577-vw92k" WorkloadEndpoint="ip--172--31--16--11-k8s-coredns--66bc5c9577--vw92k-eth0" Nov 5 16:04:07.256158 containerd[1899]: time="2025-11-05T16:04:07.256052004Z" level=info msg="connecting to shim 2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c" address="unix:///run/containerd/s/b11714c94719e730174f3484e7e3160f8ed24856b6ae7a883857961f4354a782" namespace=k8s.io protocol=ttrpc version=3 Nov 5 
16:04:07.293543 systemd[1]: Started cri-containerd-2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c.scope - libcontainer container 2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c. Nov 5 16:04:07.358752 containerd[1899]: time="2025-11-05T16:04:07.358695391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vw92k,Uid:a38961a6-3ae9-4766-af33-07fe9a74faa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c\"" Nov 5 16:04:07.364195 systemd-networkd[1473]: calia9ba0776464: Gained IPv6LL Nov 5 16:04:07.397367 containerd[1899]: time="2025-11-05T16:04:07.397326429Z" level=info msg="CreateContainer within sandbox \"2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 16:04:07.419925 containerd[1899]: time="2025-11-05T16:04:07.419733261Z" level=info msg="Container a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:04:07.432661 containerd[1899]: time="2025-11-05T16:04:07.432501518Z" level=info msg="CreateContainer within sandbox \"2a44e0252a65f720570c35d8bea46055a57cb59ff7e0b4ba7e06b1c397fb084c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9\"" Nov 5 16:04:07.436110 containerd[1899]: time="2025-11-05T16:04:07.434916376Z" level=info msg="StartContainer for \"a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9\"" Nov 5 16:04:07.436110 containerd[1899]: time="2025-11-05T16:04:07.435893471Z" level=info msg="connecting to shim a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9" address="unix:///run/containerd/s/b11714c94719e730174f3484e7e3160f8ed24856b6ae7a883857961f4354a782" protocol=ttrpc version=3 Nov 5 16:04:07.462246 systemd[1]: Started cri-containerd-a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9.scope - libcontainer container a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9. 
Nov 5 16:04:07.508285 containerd[1899]: time="2025-11-05T16:04:07.508178679Z" level=info msg="StartContainer for \"a1744c5561be3568074c29bba9d4b33b15c0ed44b83b1dbe8eea2b812050a8d9\" returns successfully" Nov 5 16:04:07.636723 kubelet[3198]: E1105 16:04:07.636668 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:04:07.672661 kubelet[3198]: I1105 16:04:07.672329 3198 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vw92k" podStartSLOduration=89.672306337 podStartE2EDuration="1m29.672306337s" podCreationTimestamp="2025-11-05 16:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:07.67145803 +0000 UTC m=+95.857513823" watchObservedRunningTime="2025-11-05 16:04:07.672306337 +0000 UTC m=+95.858362131" Nov 5 16:04:08.961218 systemd-networkd[1473]: cali74d50f73832: Gained IPv6LL Nov 5 16:04:11.242878 systemd[1]: Started sshd@8-172.31.16.11:22-139.178.68.195:57962.service - OpenSSH per-connection server daemon (139.178.68.195:57962). Nov 5 16:04:11.471002 sshd[5486]: Accepted publickey for core from 139.178.68.195 port 57962 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:11.473252 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:11.480466 systemd-logind[1855]: New session 9 of user core. Nov 5 16:04:11.490328 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 16:04:11.880247 sshd[5489]: Connection closed by 139.178.68.195 port 57962 Nov 5 16:04:11.882713 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:11.890139 systemd[1]: sshd@8-172.31.16.11:22-139.178.68.195:57962.service: Deactivated successfully. Nov 5 16:04:11.892825 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 16:04:11.894026 systemd-logind[1855]: Session 9 logged out. Waiting for processes to exit. Nov 5 16:04:11.896832 systemd-logind[1855]: Removed session 9. 
Nov 5 16:04:11.938924 ntpd[1844]: Listen normally on 13 cali178a46e49ad [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 16:04:11.939460 ntpd[1844]: Listen normally on 14 calia9ba0776464 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 16:04:11.940756 ntpd[1844]: 5 Nov 16:04:11 ntpd[1844]: Listen normally on 13 cali178a46e49ad [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 16:04:11.940756 ntpd[1844]: 5 Nov 16:04:11 ntpd[1844]: Listen normally on 14 calia9ba0776464 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 16:04:11.940756 ntpd[1844]: 5 Nov 16:04:11 ntpd[1844]: Listen normally on 15 cali74d50f73832 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 16:04:11.939503 ntpd[1844]: Listen normally on 15 cali74d50f73832 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 16:04:12.010081 containerd[1899]: time="2025-11-05T16:04:12.009881016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:04:12.326566 containerd[1899]: time="2025-11-05T16:04:12.326421150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:12.329210 containerd[1899]: time="2025-11-05T16:04:12.329083503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:04:12.329210 containerd[1899]: time="2025-11-05T16:04:12.329136430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:04:12.329628 kubelet[3198]: E1105 16:04:12.329559 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:12.329628 kubelet[3198]: E1105 16:04:12.329607 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:12.330089 kubelet[3198]: E1105 16:04:12.329785 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:12.331101 containerd[1899]: time="2025-11-05T16:04:12.331007255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:04:12.625153 containerd[1899]: time="2025-11-05T16:04:12.625023615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:12.628221 containerd[1899]: time="2025-11-05T16:04:12.628079052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:04:12.628221 containerd[1899]: time="2025-11-05T16:04:12.628186342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:12.628671 kubelet[3198]: E1105 16:04:12.628630 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:12.629031 kubelet[3198]: E1105 16:04:12.628679 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:12.629031 kubelet[3198]: E1105 16:04:12.628867 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c9c664f-fhtxd_calico-system(5b76ecda-67c8-4ccb-b2a9-6e4178612c50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:12.629031 kubelet[3198]: E1105 16:04:12.628917 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:04:12.630671 containerd[1899]: time="2025-11-05T16:04:12.630625691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:04:12.939852 containerd[1899]: time="2025-11-05T16:04:12.939788925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:12.942096 containerd[1899]: time="2025-11-05T16:04:12.941957447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:04:12.942261 containerd[1899]: time="2025-11-05T16:04:12.942103281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:12.942369 kubelet[3198]: E1105 16:04:12.942319 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:12.942448 kubelet[3198]: E1105 16:04:12.942377 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:12.943064 kubelet[3198]: E1105 16:04:12.942465 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:12.943064 kubelet[3198]: E1105 16:04:12.942517 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:04:13.005137 containerd[1899]: time="2025-11-05T16:04:13.005073146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:04:13.276677 containerd[1899]: time="2025-11-05T16:04:13.276549391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:13.278765 containerd[1899]: time="2025-11-05T16:04:13.278700077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:04:13.278913 containerd[1899]: time="2025-11-05T16:04:13.278802487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:04:13.279068 kubelet[3198]: E1105 16:04:13.279028 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:13.279331 kubelet[3198]: E1105 16:04:13.279076 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:13.279413 kubelet[3198]: E1105 16:04:13.279384 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:13.281452 containerd[1899]: time="2025-11-05T16:04:13.281408949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:04:13.557710 containerd[1899]: time="2025-11-05T16:04:13.557587191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:13.560913 containerd[1899]: time="2025-11-05T16:04:13.560841446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:04:13.561103 containerd[1899]: time="2025-11-05T16:04:13.560960661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:04:13.561211 kubelet[3198]: E1105 16:04:13.561166 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:13.561759 kubelet[3198]: E1105 16:04:13.561222 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:13.562028 kubelet[3198]: E1105 16:04:13.561822 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:13.562028 kubelet[3198]: E1105 16:04:13.561893 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:04:15.005620 containerd[1899]: time="2025-11-05T16:04:15.005573281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:15.346375 containerd[1899]: time="2025-11-05T16:04:15.346226246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:15.348756 containerd[1899]: time="2025-11-05T16:04:15.348579551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:15.348756 containerd[1899]: time="2025-11-05T16:04:15.348698014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:15.349033 kubelet[3198]: E1105 16:04:15.348939 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:15.351528 kubelet[3198]: E1105 16:04:15.349065 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:15.351528 kubelet[3198]: E1105 16:04:15.349433 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-mjcqs_calico-apiserver(96835183-cb2e-4158-994a-2b18537288b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:15.351528 kubelet[3198]: E1105 16:04:15.349508 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:04:15.351896 containerd[1899]: time="2025-11-05T16:04:15.350614072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:15.619783 containerd[1899]: time="2025-11-05T16:04:15.619547221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:15.621771 containerd[1899]: time="2025-11-05T16:04:15.621711493Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:15.621896 containerd[1899]: time="2025-11-05T16:04:15.621835195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:15.623742 kubelet[3198]: E1105 16:04:15.623230 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:15.623742 kubelet[3198]: E1105 16:04:15.623293 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:15.623742 kubelet[3198]: E1105 16:04:15.623405 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-q5zql_calico-apiserver(436b2852-bb09-4690-8210-c17e2fe57e96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:15.623742 kubelet[3198]: E1105 16:04:15.623463 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:04:16.922929 systemd[1]: Started sshd@9-172.31.16.11:22-139.178.68.195:58294.service - OpenSSH per-connection server daemon (139.178.68.195:58294). Nov 5 16:04:17.154020 sshd[5516]: Accepted publickey for core from 139.178.68.195 port 58294 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:17.157134 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:17.165747 systemd-logind[1855]: New session 10 of user core. Nov 5 16:04:17.170186 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 16:04:17.387460 sshd[5519]: Connection closed by 139.178.68.195 port 58294 Nov 5 16:04:17.425733 sshd-session[5516]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:17.428582 systemd[1]: Started sshd@10-172.31.16.11:22-139.178.68.195:58310.service - OpenSSH per-connection server daemon (139.178.68.195:58310). Nov 5 16:04:17.435885 systemd[1]: sshd@9-172.31.16.11:22-139.178.68.195:58294.service: Deactivated successfully. Nov 5 16:04:17.439401 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 16:04:17.447196 systemd-logind[1855]: Session 10 logged out. 
Waiting for processes to exit. Nov 5 16:04:17.450639 systemd-logind[1855]: Removed session 10. Nov 5 16:04:17.620632 sshd[5531]: Accepted publickey for core from 139.178.68.195 port 58310 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:17.622561 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:17.632021 systemd-logind[1855]: New session 11 of user core. Nov 5 16:04:17.638263 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 16:04:17.963093 sshd[5537]: Connection closed by 139.178.68.195 port 58310 Nov 5 16:04:17.965614 sshd-session[5531]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:17.981031 systemd[1]: sshd@10-172.31.16.11:22-139.178.68.195:58310.service: Deactivated successfully. Nov 5 16:04:17.986391 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 16:04:17.991012 systemd-logind[1855]: Session 11 logged out. Waiting for processes to exit. Nov 5 16:04:18.012213 systemd[1]: Started sshd@11-172.31.16.11:22-139.178.68.195:58316.service - OpenSSH per-connection server daemon (139.178.68.195:58316). Nov 5 16:04:18.019034 systemd-logind[1855]: Removed session 11. Nov 5 16:04:18.261259 sshd[5547]: Accepted publickey for core from 139.178.68.195 port 58316 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:18.265348 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:18.276959 systemd-logind[1855]: New session 12 of user core. Nov 5 16:04:18.283668 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 16:04:18.579349 sshd[5550]: Connection closed by 139.178.68.195 port 58316 Nov 5 16:04:18.581885 sshd-session[5547]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:18.591467 systemd[1]: sshd@11-172.31.16.11:22-139.178.68.195:58316.service: Deactivated successfully. Nov 5 16:04:18.595868 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 16:04:18.598079 systemd-logind[1855]: Session 12 logged out. Waiting for processes to exit. Nov 5 16:04:18.600710 systemd-logind[1855]: Removed session 12. 
Nov 5 16:04:22.006166 containerd[1899]: time="2025-11-05T16:04:22.006102287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:04:22.295244 containerd[1899]: time="2025-11-05T16:04:22.294911113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:22.297735 containerd[1899]: time="2025-11-05T16:04:22.297673990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:04:22.297889 containerd[1899]: time="2025-11-05T16:04:22.297783867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:22.298176 kubelet[3198]: E1105 16:04:22.298134 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:22.298627 kubelet[3198]: E1105 16:04:22.298188 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:22.298627 kubelet[3198]: E1105 16:04:22.298371 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:22.298627 kubelet[3198]: E1105 16:04:22.298418 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:04:22.299777 containerd[1899]: time="2025-11-05T16:04:22.299735387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\" id:\"468b4836e001a89134edec7cddcb5dafdfb08d48538f519a7f021d61154b96da\" pid:5575 exited_at:{seconds:1762358662 nanos:297506127}" Nov 5 16:04:23.620121 systemd[1]: Started sshd@12-172.31.16.11:22-139.178.68.195:50056.service - OpenSSH per-connection server daemon (139.178.68.195:50056). 
Nov 5 16:04:23.821683 sshd[5591]: Accepted publickey for core from 139.178.68.195 port 50056 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:23.822663 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:23.835202 systemd-logind[1855]: New session 13 of user core. Nov 5 16:04:23.842234 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 16:04:24.155801 sshd[5594]: Connection closed by 139.178.68.195 port 50056 Nov 5 16:04:24.156704 sshd-session[5591]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:24.162868 systemd[1]: sshd@12-172.31.16.11:22-139.178.68.195:50056.service: Deactivated successfully. Nov 5 16:04:24.166124 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 16:04:24.167564 systemd-logind[1855]: Session 13 logged out. Waiting for processes to exit. Nov 5 16:04:24.169377 systemd-logind[1855]: Removed session 13. Nov 5 16:04:25.004312 kubelet[3198]: E1105 16:04:25.004252 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:04:26.007541 kubelet[3198]: E1105 16:04:26.007041 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:04:26.009179 kubelet[3198]: E1105 16:04:26.008885 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:04:26.009638 kubelet[3198]: E1105 16:04:26.009442 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:04:29.192689 systemd[1]: Started sshd@13-172.31.16.11:22-139.178.68.195:50066.service - OpenSSH per-connection server daemon (139.178.68.195:50066). Nov 5 16:04:29.457468 sshd[5610]: Accepted publickey for core from 139.178.68.195 port 50066 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:29.464452 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:29.472802 systemd-logind[1855]: New session 14 of user core. Nov 5 16:04:29.483865 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 16:04:29.898195 sshd[5624]: Connection closed by 139.178.68.195 port 50066 Nov 5 16:04:29.899295 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:29.905232 systemd[1]: sshd@13-172.31.16.11:22-139.178.68.195:50066.service: Deactivated successfully. Nov 5 16:04:29.907748 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 16:04:29.909136 systemd-logind[1855]: Session 14 logged out. Waiting for processes to exit. Nov 5 16:04:29.910933 systemd-logind[1855]: Removed session 14. Nov 5 16:04:30.011726 kubelet[3198]: E1105 16:04:30.011639 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:04:34.938599 systemd[1]: Started sshd@14-172.31.16.11:22-139.178.68.195:53030.service - OpenSSH per-connection server daemon (139.178.68.195:53030). Nov 5 16:04:35.126691 sshd[5638]: Accepted publickey for core from 139.178.68.195 port 53030 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:35.128526 sshd-session[5638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:35.136704 systemd-logind[1855]: New session 15 of user core. Nov 5 16:04:35.141248 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 16:04:35.349246 sshd[5641]: Connection closed by 139.178.68.195 port 53030 Nov 5 16:04:35.350227 sshd-session[5638]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:35.356491 systemd[1]: sshd@14-172.31.16.11:22-139.178.68.195:53030.service: Deactivated successfully. 
Nov 5 16:04:35.359396 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 16:04:35.361550 systemd-logind[1855]: Session 15 logged out. Waiting for processes to exit. Nov 5 16:04:35.363785 systemd-logind[1855]: Removed session 15. Nov 5 16:04:36.006131 kubelet[3198]: E1105 16:04:36.005484 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:04:39.006813 containerd[1899]: time="2025-11-05T16:04:39.006766253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:04:39.294572 containerd[1899]: time="2025-11-05T16:04:39.294443143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:39.296628 containerd[1899]: time="2025-11-05T16:04:39.296551817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:04:39.296628 containerd[1899]: time="2025-11-05T16:04:39.296552029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 16:04:39.297049 kubelet[3198]: E1105 16:04:39.296952 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:39.297454 kubelet[3198]: E1105 16:04:39.297051 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:04:39.297454 kubelet[3198]: E1105 16:04:39.297372 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:39.298366 containerd[1899]: time="2025-11-05T16:04:39.298333779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:39.586255 containerd[1899]: time="2025-11-05T16:04:39.586102090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:39.588637 containerd[1899]: time="2025-11-05T16:04:39.588499280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:39.588637 containerd[1899]: time="2025-11-05T16:04:39.588614887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:39.589093 kubelet[3198]: E1105 16:04:39.589039 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:39.589093 kubelet[3198]: E1105 16:04:39.589091 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:39.589434 kubelet[3198]: E1105 16:04:39.589355 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-q5zql_calico-apiserver(436b2852-bb09-4690-8210-c17e2fe57e96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:39.589434 kubelet[3198]: E1105 16:04:39.589419 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:04:39.590294 containerd[1899]: time="2025-11-05T16:04:39.590264541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:04:39.911802 containerd[1899]: time="2025-11-05T16:04:39.911724156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:39.914274 containerd[1899]: time="2025-11-05T16:04:39.914023993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:04:39.914274 containerd[1899]: time="2025-11-05T16:04:39.914106408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:04:39.914517 kubelet[3198]: E1105 16:04:39.914449 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:39.914517 kubelet[3198]: E1105 16:04:39.914505 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:04:39.914630 kubelet[3198]: E1105 16:04:39.914593 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:39.914714 kubelet[3198]: E1105 16:04:39.914648 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:04:40.005461 containerd[1899]: time="2025-11-05T16:04:40.005418723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 16:04:40.289346 containerd[1899]: time="2025-11-05T16:04:40.289158254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:40.291530 containerd[1899]: time="2025-11-05T16:04:40.291330359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 16:04:40.291530 containerd[1899]: time="2025-11-05T16:04:40.291401084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 16:04:40.292011 kubelet[3198]: E1105 16:04:40.291858 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:40.292234 kubelet[3198]: E1105 16:04:40.292029 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 16:04:40.292234 kubelet[3198]: E1105 16:04:40.292138 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:40.293914 containerd[1899]: time="2025-11-05T16:04:40.293135099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 16:04:40.395448 systemd[1]: Started sshd@15-172.31.16.11:22-139.178.68.195:53042.service - OpenSSH per-connection server daemon (139.178.68.195:53042). Nov 5 16:04:40.589133 sshd[5660]: Accepted publickey for core from 139.178.68.195 port 53042 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:40.590577 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:40.597650 systemd-logind[1855]: New session 16 of user core. Nov 5 16:04:40.603497 containerd[1899]: time="2025-11-05T16:04:40.603388052Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:40.604227 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 16:04:40.605670 containerd[1899]: time="2025-11-05T16:04:40.605627520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 16:04:40.605739 containerd[1899]: time="2025-11-05T16:04:40.605721298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:40.606027 kubelet[3198]: E1105 16:04:40.605990 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:40.608316 kubelet[3198]: E1105 16:04:40.606036 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 16:04:40.608316 kubelet[3198]: E1105 16:04:40.606142 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:40.608316 kubelet[3198]: E1105 
16:04:40.606516 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:04:40.857585 sshd[5663]: Connection closed by 139.178.68.195 port 53042 Nov 5 16:04:40.897853 systemd[1]: Started sshd@16-172.31.16.11:22-139.178.68.195:53046.service - OpenSSH per-connection server daemon (139.178.68.195:53046). Nov 5 16:04:40.923359 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:40.931332 systemd[1]: sshd@15-172.31.16.11:22-139.178.68.195:53042.service: Deactivated successfully. Nov 5 16:04:40.938915 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 16:04:40.943631 systemd-logind[1855]: Session 16 logged out. Waiting for processes to exit. Nov 5 16:04:40.946165 systemd-logind[1855]: Removed session 16. Nov 5 16:04:41.005249 containerd[1899]: time="2025-11-05T16:04:41.005065196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 16:04:41.116058 sshd[5672]: Accepted publickey for core from 139.178.68.195 port 53046 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:41.122091 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:41.147392 systemd-logind[1855]: New session 17 of user core. Nov 5 16:04:41.160272 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 16:04:41.267616 containerd[1899]: time="2025-11-05T16:04:41.266505634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:41.268717 containerd[1899]: time="2025-11-05T16:04:41.268652643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 16:04:41.268717 containerd[1899]: time="2025-11-05T16:04:41.268669649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 16:04:41.269189 kubelet[3198]: E1105 16:04:41.269146 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:41.269325 kubelet[3198]: E1105 16:04:41.269194 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 16:04:41.269325 kubelet[3198]: E1105 16:04:41.269271 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c9c664f-fhtxd_calico-system(5b76ecda-67c8-4ccb-b2a9-6e4178612c50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:41.269325 kubelet[3198]: E1105 16:04:41.269308 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:04:44.017442 containerd[1899]: time="2025-11-05T16:04:44.017390477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 16:04:44.414956 containerd[1899]: time="2025-11-05T16:04:44.414893090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:44.417521 containerd[1899]: time="2025-11-05T16:04:44.417389931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 16:04:44.417521 containerd[1899]: 
time="2025-11-05T16:04:44.417450578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:44.417699 kubelet[3198]: E1105 16:04:44.417662 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:44.418726 kubelet[3198]: E1105 16:04:44.417707 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 16:04:44.418726 kubelet[3198]: E1105 16:04:44.417782 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-mjcqs_calico-apiserver(96835183-cb2e-4158-994a-2b18537288b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:44.418726 kubelet[3198]: E1105 16:04:44.417813 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:04:44.911270 sshd[5678]: Connection closed by 139.178.68.195 port 53046 Nov 5 16:04:44.914126 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:44.922335 systemd[1]: sshd@16-172.31.16.11:22-139.178.68.195:53046.service: Deactivated successfully. Nov 5 16:04:44.924774 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 16:04:44.925923 systemd-logind[1855]: Session 17 logged out. Waiting for processes to exit. Nov 5 16:04:44.927796 systemd-logind[1855]: Removed session 17. Nov 5 16:04:44.941908 systemd[1]: Started sshd@17-172.31.16.11:22-139.178.68.195:53218.service - OpenSSH per-connection server daemon (139.178.68.195:53218). Nov 5 16:04:45.149514 sshd[5691]: Accepted publickey for core from 139.178.68.195 port 53218 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:45.152210 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:45.158604 systemd-logind[1855]: New session 18 of user core. Nov 5 16:04:45.170249 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 16:04:46.397739 sshd[5696]: Connection closed by 139.178.68.195 port 53218 Nov 5 16:04:46.398477 sshd-session[5691]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:46.405595 systemd[1]: sshd@17-172.31.16.11:22-139.178.68.195:53218.service: Deactivated successfully. Nov 5 16:04:46.408889 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 5 16:04:46.411426 systemd-logind[1855]: Session 18 logged out. Waiting for processes to exit. Nov 5 16:04:46.412721 systemd-logind[1855]: Removed session 18. Nov 5 16:04:46.432030 systemd[1]: Started sshd@18-172.31.16.11:22-139.178.68.195:53234.service - OpenSSH per-connection server daemon (139.178.68.195:53234). Nov 5 16:04:46.650212 sshd[5712]: Accepted publickey for core from 139.178.68.195 port 53234 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:46.656305 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:46.666405 systemd-logind[1855]: New session 19 of user core. Nov 5 16:04:46.674254 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 16:04:47.007905 containerd[1899]: time="2025-11-05T16:04:47.006601896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 16:04:47.319322 sshd[5715]: Connection closed by 139.178.68.195 port 53234 Nov 5 16:04:47.320272 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:47.328606 systemd[1]: sshd@18-172.31.16.11:22-139.178.68.195:53234.service: Deactivated successfully. Nov 5 16:04:47.333574 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 16:04:47.336124 systemd-logind[1855]: Session 19 logged out. Waiting for processes to exit. Nov 5 16:04:47.339917 systemd-logind[1855]: Removed session 19. Nov 5 16:04:47.374091 containerd[1899]: time="2025-11-05T16:04:47.373151674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:04:47.378337 containerd[1899]: time="2025-11-05T16:04:47.377839613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 16:04:47.378337 containerd[1899]: time="2025-11-05T16:04:47.378000076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 16:04:47.378551 kubelet[3198]: E1105 16:04:47.378370 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:47.378551 kubelet[3198]: E1105 16:04:47.378428 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 16:04:47.378551 kubelet[3198]: E1105 16:04:47.378533 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 16:04:47.383759 kubelet[3198]: E1105 16:04:47.378575 3198 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:04:47.382808 systemd[1]: Started sshd@19-172.31.16.11:22-139.178.68.195:53244.service - OpenSSH per-connection server daemon (139.178.68.195:53244). Nov 5 16:04:47.624035 sshd[5727]: Accepted publickey for core from 139.178.68.195 port 53244 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:47.626037 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:47.632582 systemd-logind[1855]: New session 20 of user core. Nov 5 16:04:47.636152 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 16:04:47.876096 sshd[5732]: Connection closed by 139.178.68.195 port 53244 Nov 5 16:04:47.876675 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:47.881073 systemd[1]: sshd@19-172.31.16.11:22-139.178.68.195:53244.service: Deactivated successfully. Nov 5 16:04:47.883725 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 16:04:47.886499 systemd-logind[1855]: Session 20 logged out. Waiting for processes to exit. Nov 5 16:04:47.888541 systemd-logind[1855]: Removed session 20. Nov 5 16:04:51.004058 kubelet[3198]: E1105 16:04:51.003726 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:04:52.195805 containerd[1899]: time="2025-11-05T16:04:52.195760963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\" id:\"17e14aa3753d51738b27b459abf6a3bac703b6867c613697e4eb252dde30dfef\" pid:5759 exited_at:{seconds:1762358692 nanos:195398610}" Nov 5 16:04:52.923758 systemd[1]: Started sshd@20-172.31.16.11:22-139.178.68.195:53250.service - OpenSSH per-connection server daemon (139.178.68.195:53250). Nov 5 16:04:53.173574 sshd[5771]: Accepted publickey for core from 139.178.68.195 port 53250 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:53.200371 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:53.210383 systemd-logind[1855]: New session 21 of user core. Nov 5 16:04:53.214562 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 16:04:53.545534 sshd[5775]: Connection closed by 139.178.68.195 port 53250 Nov 5 16:04:53.546434 sshd-session[5771]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:53.551712 systemd[1]: sshd@20-172.31.16.11:22-139.178.68.195:53250.service: Deactivated successfully. Nov 5 16:04:53.554400 systemd[1]: session-21.scope: Deactivated successfully. 
Nov 5 16:04:53.555850 systemd-logind[1855]: Session 21 logged out. Waiting for processes to exit. Nov 5 16:04:53.558176 systemd-logind[1855]: Removed session 21. Nov 5 16:04:54.010915 kubelet[3198]: E1105 16:04:54.010857 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:04:54.013023 kubelet[3198]: E1105 16:04:54.012906 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:04:56.009893 kubelet[3198]: E1105 16:04:56.009444 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:04:56.011894 kubelet[3198]: E1105 16:04:56.011501 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" 
podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:04:58.007412 kubelet[3198]: E1105 16:04:58.007305 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:04:58.585391 systemd[1]: Started sshd@21-172.31.16.11:22-139.178.68.195:32846.service - OpenSSH per-connection server daemon (139.178.68.195:32846). Nov 5 16:04:58.771180 sshd[5789]: Accepted publickey for core from 139.178.68.195 port 32846 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:04:58.774733 sshd-session[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:58.782042 systemd-logind[1855]: New session 22 of user core. Nov 5 16:04:58.793478 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 16:04:59.101012 sshd[5792]: Connection closed by 139.178.68.195 port 32846 Nov 5 16:04:59.101995 sshd-session[5789]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:59.112212 systemd[1]: sshd@21-172.31.16.11:22-139.178.68.195:32846.service: Deactivated successfully. Nov 5 16:04:59.118756 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 16:04:59.121951 systemd-logind[1855]: Session 22 logged out. Waiting for processes to exit. Nov 5 16:04:59.124575 systemd-logind[1855]: Removed session 22. Nov 5 16:05:03.007424 kubelet[3198]: E1105 16:05:03.007367 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:05:04.144579 systemd[1]: Started sshd@22-172.31.16.11:22-139.178.68.195:57168.service - OpenSSH per-connection server daemon (139.178.68.195:57168). Nov 5 16:05:04.397706 sshd[5804]: Accepted publickey for core from 139.178.68.195 port 57168 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:05:04.400597 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:04.412472 systemd-logind[1855]: New session 23 of user core. Nov 5 16:05:04.418468 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 5 16:05:05.011292 kubelet[3198]: E1105 16:05:05.010014 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:05:05.131504 sshd[5807]: Connection closed by 139.178.68.195 port 57168 Nov 5 16:05:05.134017 sshd-session[5804]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:05.144711 systemd[1]: sshd@22-172.31.16.11:22-139.178.68.195:57168.service: Deactivated successfully. Nov 5 16:05:05.148927 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 16:05:05.151181 systemd-logind[1855]: Session 23 logged out. Waiting for processes to exit. Nov 5 16:05:05.154198 systemd-logind[1855]: Removed session 23. Nov 5 16:05:07.009000 kubelet[3198]: E1105 16:05:07.008935 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7" Nov 5 16:05:07.010438 kubelet[3198]: E1105 16:05:07.010387 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41" Nov 5 16:05:07.017324 kubelet[3198]: E1105 16:05:07.017284 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:05:09.005014 kubelet[3198]: E1105 16:05:09.004723 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:05:10.174406 systemd[1]: Started sshd@23-172.31.16.11:22-139.178.68.195:57182.service - OpenSSH per-connection server daemon (139.178.68.195:57182). Nov 5 16:05:10.413813 sshd[5819]: Accepted publickey for core from 139.178.68.195 port 57182 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:05:10.416013 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:10.426686 systemd-logind[1855]: New session 24 of user core. Nov 5 16:05:10.431311 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 16:05:10.808878 sshd[5822]: Connection closed by 139.178.68.195 port 57182 Nov 5 16:05:10.809161 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:10.816877 systemd[1]: sshd@23-172.31.16.11:22-139.178.68.195:57182.service: Deactivated successfully. Nov 5 16:05:10.821885 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 16:05:10.824593 systemd-logind[1855]: Session 24 logged out. Waiting for processes to exit. Nov 5 16:05:10.827343 systemd-logind[1855]: Removed session 24. Nov 5 16:05:14.005609 kubelet[3198]: E1105 16:05:14.005180 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96" Nov 5 16:05:15.845377 systemd[1]: Started sshd@24-172.31.16.11:22-139.178.68.195:39690.service - OpenSSH per-connection server daemon (139.178.68.195:39690). 
Nov 5 16:05:16.008000 kubelet[3198]: E1105 16:05:16.007277 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50" Nov 5 16:05:16.105197 sshd[5835]: Accepted publickey for core from 139.178.68.195 port 39690 ssh2: RSA SHA256:lDTkkttfrdf0waMsUCrkt3PttT+f70EKKZ9M0wGKTjg Nov 5 16:05:16.110329 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:05:16.122065 systemd-logind[1855]: New session 25 of user core. Nov 5 16:05:16.129204 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 16:05:16.515002 sshd[5838]: Connection closed by 139.178.68.195 port 39690 Nov 5 16:05:16.515637 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Nov 5 16:05:16.522338 systemd-logind[1855]: Session 25 logged out. Waiting for processes to exit. Nov 5 16:05:16.523862 systemd[1]: sshd@24-172.31.16.11:22-139.178.68.195:39690.service: Deactivated successfully. Nov 5 16:05:16.529092 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 16:05:16.534885 systemd-logind[1855]: Removed session 25. Nov 5 16:05:18.007644 kubelet[3198]: E1105 16:05:18.006955 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4" Nov 5 16:05:20.005069 kubelet[3198]: E1105 16:05:20.005019 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911" Nov 5 16:05:21.004247 containerd[1899]: time="2025-11-05T16:05:21.004181732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 16:05:21.359572 containerd[1899]: time="2025-11-05T16:05:21.359275826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:21.361610 containerd[1899]: time="2025-11-05T16:05:21.361419324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found" Nov 5 16:05:21.362075 containerd[1899]: time="2025-11-05T16:05:21.361550413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 16:05:21.362546 kubelet[3198]: E1105 16:05:21.362497 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:21.362930 kubelet[3198]: E1105 16:05:21.362552 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 16:05:21.362930 kubelet[3198]: E1105 16:05:21.362634 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 16:05:21.363954 containerd[1899]: time="2025-11-05T16:05:21.363925842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 16:05:21.691171 containerd[1899]: time="2025-11-05T16:05:21.691103698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 16:05:21.693208 containerd[1899]: time="2025-11-05T16:05:21.693151620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 16:05:21.693363 containerd[1899]: time="2025-11-05T16:05:21.693246045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 16:05:21.693568 kubelet[3198]: E1105 16:05:21.693449 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:21.693568 kubelet[3198]: E1105 16:05:21.693498 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 16:05:21.693721 kubelet[3198]: E1105 16:05:21.693605 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-7k4x5_calico-system(d0a5c89c-b602-442e-811b-c3720b9add41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:21.693721 kubelet[3198]: E1105 16:05:21.693648 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41"
Nov 5 16:05:22.009657 containerd[1899]: time="2025-11-05T16:05:22.009167044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 5 16:05:22.220067 containerd[1899]: time="2025-11-05T16:05:22.219802900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc202849b093544ceac99fd4d85bc89166703102cfcef98e6208384d93753469\" id:\"9e2330fd67e146927f46ce2c132b9db3d32fa0e6ff04980f544a950659b92436\" pid:5868 exited_at:{seconds:1762358722 nanos:219092694}"
Nov 5 16:05:22.337466 containerd[1899]: time="2025-11-05T16:05:22.337331060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:22.348756 containerd[1899]: time="2025-11-05T16:05:22.348674765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 5 16:05:22.348925 containerd[1899]: time="2025-11-05T16:05:22.348771486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 5 16:05:22.349151 kubelet[3198]: E1105 16:05:22.349076 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 16:05:22.349151 kubelet[3198]: E1105 16:05:22.349132 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 5 16:05:22.349550 kubelet[3198]: E1105 16:05:22.349205 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:22.350794 containerd[1899]: time="2025-11-05T16:05:22.350694683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 5 16:05:22.691843 containerd[1899]: time="2025-11-05T16:05:22.691760079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:22.693936 containerd[1899]: time="2025-11-05T16:05:22.693877045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 5 16:05:22.694328 kubelet[3198]: E1105 16:05:22.694246 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 16:05:22.694328 kubelet[3198]: E1105 16:05:22.694290 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 16:05:22.694758 kubelet[3198]: E1105 16:05:22.694385 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c75ccf967-dqkw4_calico-system(34aa5cb5-d018-431d-960a-4659dc21c0b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:22.694758 kubelet[3198]: E1105 16:05:22.694429 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7"
Nov 5 16:05:22.704660 containerd[1899]: time="2025-11-05T16:05:22.693898368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 5 16:05:28.003921 containerd[1899]: time="2025-11-05T16:05:28.003824530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 16:05:28.348594 containerd[1899]: time="2025-11-05T16:05:28.348419077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:28.350469 containerd[1899]: time="2025-11-05T16:05:28.350418199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 16:05:28.350621 containerd[1899]: time="2025-11-05T16:05:28.350515967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 16:05:28.350750 kubelet[3198]: E1105 16:05:28.350692 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 16:05:28.351442 kubelet[3198]: E1105 16:05:28.350748 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 16:05:28.351442 kubelet[3198]: E1105 16:05:28.351067 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-q5zql_calico-apiserver(436b2852-bb09-4690-8210-c17e2fe57e96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:28.351442 kubelet[3198]: E1105 16:05:28.351120 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96"
Nov 5 16:05:29.004696 containerd[1899]: time="2025-11-05T16:05:29.004649214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 16:05:29.369769 containerd[1899]: time="2025-11-05T16:05:29.369621613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:29.372228 containerd[1899]: time="2025-11-05T16:05:29.372170911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 16:05:29.372495 containerd[1899]: time="2025-11-05T16:05:29.372419575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 16:05:29.372610 kubelet[3198]: E1105 16:05:29.372580 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 16:05:29.373373 kubelet[3198]: E1105 16:05:29.372619 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 16:05:29.373373 kubelet[3198]: E1105 16:05:29.372885 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-8fffdb464-mjcqs_calico-apiserver(96835183-cb2e-4158-994a-2b18537288b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:29.373373 kubelet[3198]: E1105 16:05:29.373015 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4"
Nov 5 16:05:29.373498 containerd[1899]: time="2025-11-05T16:05:29.372916784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 5 16:05:29.725997 containerd[1899]: time="2025-11-05T16:05:29.725924152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:29.728642 containerd[1899]: time="2025-11-05T16:05:29.728561006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 5 16:05:29.728828 containerd[1899]: time="2025-11-05T16:05:29.728585947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 5 16:05:29.728918 kubelet[3198]: E1105 16:05:29.728865 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 16:05:29.729516 kubelet[3198]: E1105 16:05:29.728933 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 16:05:29.729516 kubelet[3198]: E1105 16:05:29.729109 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5f9c9c664f-fhtxd_calico-system(5b76ecda-67c8-4ccb-b2a9-6e4178612c50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:29.729516 kubelet[3198]: E1105 16:05:29.729162 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50"
Nov 5 16:05:32.831865 systemd[1]: cri-containerd-4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03.scope: Deactivated successfully.
Nov 5 16:05:32.832726 systemd[1]: cri-containerd-4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03.scope: Consumed 4.237s CPU time, 90.1M memory peak, 53.5M read from disk.
Nov 5 16:05:32.869136 containerd[1899]: time="2025-11-05T16:05:32.869090362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\" id:\"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\" pid:3030 exit_status:1 exited_at:{seconds:1762358732 nanos:868645657}"
Nov 5 16:05:32.869668 containerd[1899]: time="2025-11-05T16:05:32.869102097Z" level=info msg="received exit event container_id:\"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\" id:\"4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03\" pid:3030 exit_status:1 exited_at:{seconds:1762358732 nanos:868645657}"
Nov 5 16:05:32.965099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03-rootfs.mount: Deactivated successfully.
Nov 5 16:05:33.004372 containerd[1899]: time="2025-11-05T16:05:33.004335501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 16:05:33.335728 systemd[1]: cri-containerd-18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c.scope: Deactivated successfully.
Nov 5 16:05:33.336139 systemd[1]: cri-containerd-18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c.scope: Consumed 13.678s CPU time, 104.5M memory peak, 43.2M read from disk.
Nov 5 16:05:33.340282 containerd[1899]: time="2025-11-05T16:05:33.340110991Z" level=info msg="received exit event container_id:\"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\" id:\"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\" pid:3569 exit_status:1 exited_at:{seconds:1762358733 nanos:339435131}"
Nov 5 16:05:33.340501 containerd[1899]: time="2025-11-05T16:05:33.340122168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\" id:\"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\" pid:3569 exit_status:1 exited_at:{seconds:1762358733 nanos:339435131}"
Nov 5 16:05:33.367859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c-rootfs.mount: Deactivated successfully.
Nov 5 16:05:33.430559 containerd[1899]: time="2025-11-05T16:05:33.430513975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 16:05:33.432674 containerd[1899]: time="2025-11-05T16:05:33.432624893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 16:05:33.432869 containerd[1899]: time="2025-11-05T16:05:33.432727195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 16:05:33.432955 kubelet[3198]: E1105 16:05:33.432900 3198 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 16:05:33.432955 kubelet[3198]: E1105 16:05:33.432949 3198 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 16:05:33.433460 kubelet[3198]: E1105 16:05:33.433079 3198 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-qgjqk_calico-system(c259a7b3-0c1e-4695-b558-e42d28fb4911): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 5 16:05:33.433460 kubelet[3198]: E1105 16:05:33.433129 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911"
Nov 5 16:05:34.026396 kubelet[3198]: I1105 16:05:34.026329 3198 scope.go:117] "RemoveContainer" containerID="18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c"
Nov 5 16:05:34.027136 kubelet[3198]: I1105 16:05:34.026663 3198 scope.go:117] "RemoveContainer" containerID="4ddeebe734a273a5c91c9a5b1281ce88b4fff36303263f935b1783e805f92a03"
Nov 5 16:05:34.069074 containerd[1899]: time="2025-11-05T16:05:34.068920701Z" level=info msg="CreateContainer within sandbox \"9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 5 16:05:34.069074 containerd[1899]: time="2025-11-05T16:05:34.068920727Z" level=info msg="CreateContainer within sandbox \"a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 5 16:05:34.122203 containerd[1899]: time="2025-11-05T16:05:34.122153587Z" level=info msg="Container 68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:05:34.135998 containerd[1899]: time="2025-11-05T16:05:34.132149490Z" level=info msg="Container 2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:05:34.156187 containerd[1899]: time="2025-11-05T16:05:34.155273474Z" level=info msg="CreateContainer within sandbox \"a23e41a7839367d38db9b7c2cef5d422ed2c5e0b182f42f7303f70dddb435098\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f\""
Nov 5 16:05:34.156187 containerd[1899]: time="2025-11-05T16:05:34.155856198Z" level=info msg="StartContainer for \"2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f\""
Nov 5 16:05:34.157485 containerd[1899]: time="2025-11-05T16:05:34.157449104Z" level=info msg="connecting to shim 2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f" address="unix:///run/containerd/s/1fcd1baf9256f150e1c1d0175bbd0437f6b700096669b5faec4c06d2f856682e" protocol=ttrpc version=3
Nov 5 16:05:34.169415 containerd[1899]: time="2025-11-05T16:05:34.169361141Z" level=info msg="CreateContainer within sandbox \"9626428a66de1d72b4b6c4a536740665cee7c4b677a248b54127ea37b4a5fe0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\""
Nov 5 16:05:34.171183 containerd[1899]: time="2025-11-05T16:05:34.171141882Z" level=info msg="StartContainer for \"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\""
Nov 5 16:05:34.173046 containerd[1899]: time="2025-11-05T16:05:34.173004161Z" level=info msg="connecting to shim 68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa" address="unix:///run/containerd/s/a9ac3224e2b220bb5bd67ae6013fcebcee03a0013fdcaac7f26c163c5229950d" protocol=ttrpc version=3
Nov 5 16:05:34.213225 systemd[1]: Started cri-containerd-2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f.scope - libcontainer container 2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f.
Nov 5 16:05:34.214401 systemd[1]: Started cri-containerd-68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa.scope - libcontainer container 68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa.
Nov 5 16:05:34.336209 containerd[1899]: time="2025-11-05T16:05:34.335626921Z" level=info msg="StartContainer for \"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\" returns successfully"
Nov 5 16:05:34.351681 containerd[1899]: time="2025-11-05T16:05:34.350967856Z" level=info msg="StartContainer for \"2494cc950a5d4c7e6d110feaaa08e378625f97a25039b03d8f6b18a18234ad2f\" returns successfully"
Nov 5 16:05:35.004835 kubelet[3198]: E1105 16:05:35.004781 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41"
Nov 5 16:05:35.194547 kubelet[3198]: E1105 16:05:35.194307 3198 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": context deadline exceeded"
Nov 5 16:05:36.927131 systemd[1]: cri-containerd-e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8.scope: Deactivated successfully.
Nov 5 16:05:36.928160 systemd[1]: cri-containerd-e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8.scope: Consumed 2.982s CPU time, 39.8M memory peak, 31.8M read from disk.
Nov 5 16:05:36.935498 containerd[1899]: time="2025-11-05T16:05:36.935444087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\" id:\"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\" pid:3029 exit_status:1 exited_at:{seconds:1762358736 nanos:933440230}"
Nov 5 16:05:36.937282 containerd[1899]: time="2025-11-05T16:05:36.937239288Z" level=info msg="received exit event container_id:\"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\" id:\"e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8\" pid:3029 exit_status:1 exited_at:{seconds:1762358736 nanos:933440230}"
Nov 5 16:05:36.979948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8-rootfs.mount: Deactivated successfully.
Nov 5 16:05:37.075406 kubelet[3198]: I1105 16:05:37.075378 3198 scope.go:117] "RemoveContainer" containerID="e536fb57a55270446c435d75d3430c97a4e1aa60795d371e469b1f46a8e7d3f8"
Nov 5 16:05:37.081162 containerd[1899]: time="2025-11-05T16:05:37.081120060Z" level=info msg="CreateContainer within sandbox \"a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 5 16:05:37.113920 containerd[1899]: time="2025-11-05T16:05:37.113717492Z" level=info msg="Container 8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:05:37.148452 containerd[1899]: time="2025-11-05T16:05:37.148218974Z" level=info msg="CreateContainer within sandbox \"a5f46ccfa9ce39752bd2f53d3b3f8d5be19bc280471db271658afc27027cb928\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6\""
Nov 5 16:05:37.149193 containerd[1899]: time="2025-11-05T16:05:37.149093433Z" level=info msg="StartContainer for \"8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6\""
Nov 5 16:05:37.152373 containerd[1899]: time="2025-11-05T16:05:37.152285720Z" level=info msg="connecting to shim 8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6" address="unix:///run/containerd/s/fe5f53508c91b257e7c79894d040264f24f545524d85eef1f311137fb788e19f" protocol=ttrpc version=3
Nov 5 16:05:37.195268 systemd[1]: Started cri-containerd-8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6.scope - libcontainer container 8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6.
Nov 5 16:05:37.266639 containerd[1899]: time="2025-11-05T16:05:37.266586181Z" level=info msg="StartContainer for \"8658015cca681e6d9e58680267d55e94eded3e1aca33f89e626f5ca70f9fbea6\" returns successfully"
Nov 5 16:05:38.009001 kubelet[3198]: E1105 16:05:38.008340 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7"
Nov 5 16:05:39.003266 kubelet[3198]: E1105 16:05:39.003218 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96"
Nov 5 16:05:40.003851 kubelet[3198]: E1105 16:05:40.003773 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-mjcqs" podUID="96835183-cb2e-4158-994a-2b18537288b4"
Nov 5 16:05:42.003693 kubelet[3198]: E1105 16:05:42.003586 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f9c9c664f-fhtxd" podUID="5b76ecda-67c8-4ccb-b2a9-6e4178612c50"
Nov 5 16:05:45.196145 kubelet[3198]: E1105 16:05:45.195593 3198 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": context deadline exceeded"
Nov 5 16:05:47.065872 systemd[1]: cri-containerd-68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa.scope: Deactivated successfully.
Nov 5 16:05:47.067102 systemd[1]: cri-containerd-68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa.scope: Consumed 406ms CPU time, 67.3M memory peak, 33.1M read from disk.
Nov 5 16:05:47.068135 containerd[1899]: time="2025-11-05T16:05:47.067478501Z" level=info msg="received exit event container_id:\"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\" id:\"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\" pid:5958 exit_status:1 exited_at:{seconds:1762358747 nanos:66577945}"
Nov 5 16:05:47.077229 containerd[1899]: time="2025-11-05T16:05:47.077167244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\" id:\"68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa\" pid:5958 exit_status:1 exited_at:{seconds:1762358747 nanos:66577945}"
Nov 5 16:05:47.095995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa-rootfs.mount: Deactivated successfully.
Nov 5 16:05:48.003878 kubelet[3198]: E1105 16:05:48.003658 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-qgjqk" podUID="c259a7b3-0c1e-4695-b558-e42d28fb4911"
Nov 5 16:05:48.130148 kubelet[3198]: I1105 16:05:48.130096 3198 scope.go:117] "RemoveContainer" containerID="18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c"
Nov 5 16:05:48.130323 kubelet[3198]: I1105 16:05:48.130290 3198 scope.go:117] "RemoveContainer" containerID="68b45d746039b991e65ce12dc6d23c306d909404eb74227c6c795ad877748caa"
Nov 5 16:05:48.131265 kubelet[3198]: E1105 16:05:48.130595 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-8s7gb_tigera-operator(59d40a32-3e99-4527-9cf8-2a3105968b6b)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-8s7gb" podUID="59d40a32-3e99-4527-9cf8-2a3105968b6b"
Nov 5 16:05:48.290108 containerd[1899]: time="2025-11-05T16:05:48.289995225Z" level=info msg="RemoveContainer for \"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\""
Nov 5 16:05:48.357859 containerd[1899]: time="2025-11-05T16:05:48.357806299Z" level=info msg="RemoveContainer for \"18749c7f77223173cbb3d59752beb67552e877aa06ca9700bad89a841078514c\" returns successfully"
Nov 5 16:05:49.003410 kubelet[3198]: E1105 16:05:49.003362 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c75ccf967-dqkw4" podUID="34aa5cb5-d018-431d-960a-4659dc21c0b7"
Nov 5 16:05:49.435325 systemd[1]: Started sshd@25-172.31.16.11:22-205.210.31.149:60582.service - OpenSSH per-connection server daemon (205.210.31.149:60582).
Nov 5 16:05:50.004382 kubelet[3198]: E1105 16:05:50.004325 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8fffdb464-q5zql" podUID="436b2852-bb09-4690-8210-c17e2fe57e96"
Nov 5 16:05:50.005486 kubelet[3198]: E1105 16:05:50.005429 3198 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7k4x5" podUID="d0a5c89c-b602-442e-811b-c3720b9add41"