Nov 6 00:27:51.875504 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025 Nov 6 00:27:51.875538 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:27:51.875554 kernel: BIOS-provided physical RAM map: Nov 6 00:27:51.875564 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 6 00:27:51.875573 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 6 00:27:51.875583 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 6 00:27:51.875595 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 6 00:27:51.875606 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 6 00:27:51.875616 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 6 00:27:51.875626 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 6 00:27:51.875634 kernel: NX (Execute Disable) protection: active Nov 6 00:27:51.875643 kernel: APIC: Static calls initialized Nov 6 00:27:51.875650 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Nov 6 00:27:51.875657 kernel: extended physical RAM map: Nov 6 00:27:51.875666 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 6 00:27:51.875674 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Nov 6 00:27:51.875684 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Nov 
6 00:27:51.875692 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Nov 6 00:27:51.875699 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Nov 6 00:27:51.875707 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 6 00:27:51.875715 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 6 00:27:51.875723 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 6 00:27:51.875730 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 6 00:27:51.875738 kernel: efi: EFI v2.7 by EDK II Nov 6 00:27:51.875746 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Nov 6 00:27:51.875754 kernel: secureboot: Secure boot disabled Nov 6 00:27:51.875761 kernel: SMBIOS 2.7 present. Nov 6 00:27:51.875771 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 6 00:27:51.875883 kernel: DMI: Memory slots populated: 1/1 Nov 6 00:27:51.875896 kernel: Hypervisor detected: KVM Nov 6 00:27:51.875907 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 6 00:27:51.875920 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 00:27:51.875932 kernel: kvm-clock: using sched offset of 5246382807 cycles Nov 6 00:27:51.875946 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 00:27:51.875961 kernel: tsc: Detected 2499.996 MHz processor Nov 6 00:27:51.875973 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:27:51.875985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:27:51.876002 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 6 00:27:51.876014 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 6 00:27:51.876026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:27:51.876045 kernel: Using GB pages 
for direct mapping Nov 6 00:27:51.876058 kernel: ACPI: Early table checksum verification disabled Nov 6 00:27:51.876070 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 6 00:27:51.876083 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 6 00:27:51.876099 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 6 00:27:51.876111 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 6 00:27:51.876124 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 6 00:27:51.876137 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 6 00:27:51.876149 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 6 00:27:51.876162 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 6 00:27:51.876174 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 6 00:27:51.876190 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 6 00:27:51.876202 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 6 00:27:51.876215 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 6 00:27:51.876228 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 6 00:27:51.876241 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 6 00:27:51.876253 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 6 00:27:51.876265 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 6 00:27:51.876278 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 6 00:27:51.876290 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 6 00:27:51.876320 kernel: ACPI: Reserving APIC table 
memory at [mem 0x78959000-0x78959075] Nov 6 00:27:51.876331 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 6 00:27:51.876344 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 6 00:27:51.876356 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 6 00:27:51.876367 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Nov 6 00:27:51.876378 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Nov 6 00:27:51.876390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 6 00:27:51.876404 kernel: NUMA: Initialized distance table, cnt=1 Nov 6 00:27:51.876417 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Nov 6 00:27:51.876434 kernel: Zone ranges: Nov 6 00:27:51.876448 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:27:51.876463 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 6 00:27:51.876477 kernel: Normal empty Nov 6 00:27:51.876490 kernel: Device empty Nov 6 00:27:51.876504 kernel: Movable zone start for each node Nov 6 00:27:51.876517 kernel: Early memory node ranges Nov 6 00:27:51.876532 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 6 00:27:51.876545 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 6 00:27:51.876560 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 6 00:27:51.876576 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 6 00:27:51.876591 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:27:51.876605 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 6 00:27:51.876619 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 6 00:27:51.876633 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 6 00:27:51.876647 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 6 00:27:51.876661 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 00:27:51.876675 kernel: 
IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 6 00:27:51.876688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 00:27:51.876705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:27:51.876719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 00:27:51.876733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 00:27:51.876747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:27:51.876761 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 00:27:51.876775 kernel: TSC deadline timer available Nov 6 00:27:51.877123 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:27:51.877137 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:27:51.877150 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:27:51.877169 kernel: CPU topo: Max. threads per core: 2 Nov 6 00:27:51.877182 kernel: CPU topo: Num. cores per package: 1 Nov 6 00:27:51.877196 kernel: CPU topo: Num. 
threads per package: 2 Nov 6 00:27:51.877209 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 6 00:27:51.877222 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 00:27:51.877236 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 6 00:27:51.877249 kernel: Booting paravirtualized kernel on KVM Nov 6 00:27:51.877263 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:27:51.877277 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 00:27:51.877290 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 6 00:27:51.877307 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 6 00:27:51.877320 kernel: pcpu-alloc: [0] 0 1 Nov 6 00:27:51.877333 kernel: kvm-guest: PV spinlocks enabled Nov 6 00:27:51.877347 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 00:27:51.877362 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:27:51.877376 kernel: random: crng init done Nov 6 00:27:51.877389 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 00:27:51.877405 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 00:27:51.877419 kernel: Fallback order for Node 0: 0 Nov 6 00:27:51.877432 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Nov 6 00:27:51.877446 kernel: Policy zone: DMA32 Nov 6 00:27:51.877470 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:27:51.877486 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 00:27:51.877500 kernel: Kernel/User page tables isolation: enabled Nov 6 00:27:51.877513 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:27:51.877527 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:27:51.877541 kernel: Dynamic Preempt: voluntary Nov 6 00:27:51.877555 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:27:51.877571 kernel: rcu: RCU event tracing is enabled. Nov 6 00:27:51.877587 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 00:27:51.877602 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:27:51.877616 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:27:51.877630 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:27:51.877644 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:27:51.877658 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 00:27:51.877675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:27:51.877689 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:27:51.877704 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:27:51.877718 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 00:27:51.877732 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 6 00:27:51.877746 kernel: Console: colour dummy device 80x25 Nov 6 00:27:51.877761 kernel: printk: legacy console [tty0] enabled Nov 6 00:27:51.877775 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:27:51.877804 kernel: ACPI: Core revision 20240827 Nov 6 00:27:51.877818 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 6 00:27:51.877832 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:27:51.877846 kernel: x2apic enabled Nov 6 00:27:51.877860 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:27:51.877874 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Nov 6 00:27:51.877888 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Nov 6 00:27:51.877901 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 6 00:27:51.877915 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 6 00:27:51.877932 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:27:51.877945 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:27:51.877958 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:27:51.877972 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 6 00:27:51.877986 kernel: RETBleed: Vulnerable Nov 6 00:27:51.877999 kernel: Speculative Store Bypass: Vulnerable Nov 6 00:27:51.878013 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 00:27:51.878027 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 6 00:27:51.878040 kernel: GDS: Unknown: Dependent on hypervisor status Nov 6 00:27:51.878053 kernel: active return thunk: its_return_thunk Nov 6 00:27:51.878067 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 00:27:51.878083 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:27:51.878097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:27:51.878111 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:27:51.878125 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 6 00:27:51.878138 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 6 00:27:51.878152 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 6 00:27:51.878166 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 6 00:27:51.878179 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 6 00:27:51.878193 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 6 00:27:51.878207 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:27:51.878221 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 6 00:27:51.878238 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 6 00:27:51.878252 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 6 00:27:51.878265 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 6 00:27:51.878278 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 6 00:27:51.878291 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 6 00:27:51.878305 kernel: x86/fpu: Enabled xstate features 0x2ff, 
context size is 2568 bytes, using 'compacted' format. Nov 6 00:27:51.878319 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:27:51.878332 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:27:51.878345 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:27:51.878359 kernel: landlock: Up and running. Nov 6 00:27:51.878372 kernel: SELinux: Initializing. Nov 6 00:27:51.878386 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 00:27:51.878402 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 00:27:51.878416 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 6 00:27:51.878429 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 6 00:27:51.878443 kernel: signal: max sigframe size: 3632 Nov 6 00:27:51.878458 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:27:51.878472 kernel: rcu: Max phase no-delay instances is 400. Nov 6 00:27:51.878485 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:27:51.878499 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 00:27:51.878512 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:27:51.878529 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:27:51.878543 kernel: .... node #0, CPUs: #1 Nov 6 00:27:51.878558 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 6 00:27:51.878573 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 6 00:27:51.878586 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 00:27:51.878600 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Nov 6 00:27:51.878614 kernel: Memory: 1901908K/2037804K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 131332K reserved, 0K cma-reserved) Nov 6 00:27:51.878628 kernel: devtmpfs: initialized Nov 6 00:27:51.878642 kernel: x86/mm: Memory block size: 128MB Nov 6 00:27:51.878659 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 6 00:27:51.878673 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:27:51.878687 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 00:27:51.878700 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:27:51.878714 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:27:51.878728 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:27:51.878741 kernel: audit: type=2000 audit(1762388870.648:1): state=initialized audit_enabled=0 res=1 Nov 6 00:27:51.878755 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:27:51.878772 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:27:51.878811 kernel: cpuidle: using governor menu Nov 6 00:27:51.878846 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:27:51.878859 kernel: dca service started, version 1.12.1 Nov 6 00:27:51.878872 kernel: PCI: Using configuration type 1 for base access Nov 6 00:27:51.878885 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 00:27:51.878899 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:27:51.878913 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:27:51.878927 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:27:51.878947 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:27:51.878961 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:27:51.878977 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:27:51.878991 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:27:51.879007 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 6 00:27:51.879023 kernel: ACPI: Interpreter enabled Nov 6 00:27:51.879038 kernel: ACPI: PM: (supports S0 S5) Nov 6 00:27:51.879052 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:27:51.879066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:27:51.879080 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 00:27:51.879097 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 6 00:27:51.879112 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 00:27:51.879333 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 6 00:27:51.879470 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 6 00:27:51.879605 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 6 00:27:51.879623 kernel: acpiphp: Slot [3] registered Nov 6 00:27:51.879637 kernel: acpiphp: Slot [4] registered Nov 6 00:27:51.879658 kernel: acpiphp: Slot [5] registered Nov 6 00:27:51.879673 kernel: acpiphp: Slot [6] registered Nov 6 00:27:51.879689 kernel: acpiphp: Slot [7] registered Nov 6 00:27:51.879703 kernel: acpiphp: Slot [8] registered Nov 6 00:27:51.879718 kernel: acpiphp: Slot [9] registered Nov 6 00:27:51.879731 
kernel: acpiphp: Slot [10] registered Nov 6 00:27:51.879744 kernel: acpiphp: Slot [11] registered Nov 6 00:27:51.879758 kernel: acpiphp: Slot [12] registered Nov 6 00:27:51.879772 kernel: acpiphp: Slot [13] registered Nov 6 00:27:51.879808 kernel: acpiphp: Slot [14] registered Nov 6 00:27:51.879823 kernel: acpiphp: Slot [15] registered Nov 6 00:27:51.879835 kernel: acpiphp: Slot [16] registered Nov 6 00:27:51.879851 kernel: acpiphp: Slot [17] registered Nov 6 00:27:51.879866 kernel: acpiphp: Slot [18] registered Nov 6 00:27:51.879881 kernel: acpiphp: Slot [19] registered Nov 6 00:27:51.879896 kernel: acpiphp: Slot [20] registered Nov 6 00:27:51.879910 kernel: acpiphp: Slot [21] registered Nov 6 00:27:51.879926 kernel: acpiphp: Slot [22] registered Nov 6 00:27:51.879941 kernel: acpiphp: Slot [23] registered Nov 6 00:27:51.879958 kernel: acpiphp: Slot [24] registered Nov 6 00:27:51.879972 kernel: acpiphp: Slot [25] registered Nov 6 00:27:51.879986 kernel: acpiphp: Slot [26] registered Nov 6 00:27:51.880001 kernel: acpiphp: Slot [27] registered Nov 6 00:27:51.880017 kernel: acpiphp: Slot [28] registered Nov 6 00:27:51.880032 kernel: acpiphp: Slot [29] registered Nov 6 00:27:51.880047 kernel: acpiphp: Slot [30] registered Nov 6 00:27:51.880063 kernel: acpiphp: Slot [31] registered Nov 6 00:27:51.880079 kernel: PCI host bridge to bus 0000:00 Nov 6 00:27:51.880283 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 00:27:51.880422 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 00:27:51.880553 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 00:27:51.880683 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 6 00:27:51.881354 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 6 00:27:51.881494 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 00:27:51.881655 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 
0x060000 conventional PCI endpoint Nov 6 00:27:51.881825 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 6 00:27:51.881978 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Nov 6 00:27:51.882115 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 6 00:27:51.882282 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 6 00:27:51.882420 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 6 00:27:51.882554 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 6 00:27:51.882690 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 6 00:27:51.882840 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 6 00:27:51.882965 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 6 00:27:51.883096 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Nov 6 00:27:51.883221 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Nov 6 00:27:51.883346 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Nov 6 00:27:51.883469 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 00:27:51.883620 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Nov 6 00:27:51.883746 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Nov 6 00:27:51.883892 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Nov 6 00:27:51.884041 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Nov 6 00:27:51.884064 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 00:27:51.884082 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 00:27:51.884096 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 00:27:51.884119 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 00:27:51.884133 kernel: ACPI: PCI: Interrupt link LNKS configured 
for IRQ 9 Nov 6 00:27:51.884147 kernel: iommu: Default domain type: Translated Nov 6 00:27:51.884161 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:27:51.884176 kernel: efivars: Registered efivars operations Nov 6 00:27:51.884190 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:27:51.884204 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:27:51.884219 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Nov 6 00:27:51.884239 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 6 00:27:51.884256 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 6 00:27:51.885931 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 6 00:27:51.886087 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 6 00:27:51.886243 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 00:27:51.886264 kernel: vgaarb: loaded Nov 6 00:27:51.886280 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 6 00:27:51.886296 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Nov 6 00:27:51.886311 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 00:27:51.886327 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:27:51.886347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:27:51.886363 kernel: pnp: PnP ACPI init Nov 6 00:27:51.886377 kernel: pnp: PnP ACPI: found 5 devices Nov 6 00:27:51.886392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:27:51.886408 kernel: NET: Registered PF_INET protocol family Nov 6 00:27:51.886423 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 00:27:51.886438 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 00:27:51.886453 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:27:51.886468 kernel: TCP established hash table 
entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:27:51.886486 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 00:27:51.886502 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 00:27:51.886517 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:27:51.886533 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:27:51.886549 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:27:51.886563 kernel: NET: Registered PF_XDP protocol family Nov 6 00:27:51.886701 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 00:27:51.887930 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 00:27:51.888080 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 00:27:51.888204 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 6 00:27:51.888323 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 6 00:27:51.888469 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 6 00:27:51.888490 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:27:51.888506 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 00:27:51.888523 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Nov 6 00:27:51.888539 kernel: clocksource: Switched to clocksource tsc Nov 6 00:27:51.888552 kernel: Initialise system trusted keyrings Nov 6 00:27:51.888571 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 00:27:51.888589 kernel: Key type asymmetric registered Nov 6 00:27:51.888604 kernel: Asymmetric key parser 'x509' registered Nov 6 00:27:51.888618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:27:51.888633 kernel: io scheduler mq-deadline registered Nov 6 00:27:51.888649 kernel: io scheduler kyber registered Nov 6 
00:27:51.888665 kernel: io scheduler bfq registered Nov 6 00:27:51.888680 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:27:51.888696 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:27:51.888716 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:27:51.888732 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 00:27:51.888748 kernel: i8042: Warning: Keylock active Nov 6 00:27:51.888764 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 00:27:51.889655 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:27:51.889848 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 6 00:27:51.889982 kernel: rtc_cmos 00:00: registered as rtc0 Nov 6 00:27:51.890116 kernel: rtc_cmos 00:00: setting system clock to 2025-11-06T00:27:51 UTC (1762388871) Nov 6 00:27:51.890258 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 6 00:27:51.890303 kernel: intel_pstate: CPU model not supported Nov 6 00:27:51.890324 kernel: efifb: probing for efifb Nov 6 00:27:51.890344 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Nov 6 00:27:51.890364 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 6 00:27:51.890383 kernel: efifb: scrolling: redraw Nov 6 00:27:51.890400 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 6 00:27:51.890417 kernel: Console: switching to colour frame buffer device 100x37 Nov 6 00:27:51.890437 kernel: fb0: EFI VGA frame buffer device Nov 6 00:27:51.890453 kernel: pstore: Using crash dump compression: deflate Nov 6 00:27:51.890468 kernel: pstore: Registered efi_pstore as persistent store backend Nov 6 00:27:51.890485 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:27:51.890501 kernel: Segment Routing with IPv6 Nov 6 00:27:51.890515 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:27:51.890532 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:27:51.890548 kernel: Key type dns_resolver 
registered Nov 6 00:27:51.890564 kernel: IPI shorthand broadcast: enabled Nov 6 00:27:51.890581 kernel: sched_clock: Marking stable (2488001658, 142743623)->(2699085022, -68339741) Nov 6 00:27:51.890600 kernel: registered taskstats version 1 Nov 6 00:27:51.890618 kernel: Loading compiled-in X.509 certificates Nov 6 00:27:51.890634 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:27:51.890651 kernel: Demotion targets for Node 0: null Nov 6 00:27:51.890668 kernel: Key type .fscrypt registered Nov 6 00:27:51.890684 kernel: Key type fscrypt-provisioning registered Nov 6 00:27:51.890701 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:27:51.890718 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:27:51.890735 kernel: ima: No architecture policies found Nov 6 00:27:51.890754 kernel: clk: Disabling unused clocks Nov 6 00:27:51.890770 kernel: Warning: unable to open an initial console. Nov 6 00:27:51.891909 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:27:51.891929 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:27:51.891950 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:27:51.891971 kernel: Run /init as init process Nov 6 00:27:51.891987 kernel: with arguments: Nov 6 00:27:51.892004 kernel: /init Nov 6 00:27:51.892019 kernel: with environment: Nov 6 00:27:51.892034 kernel: HOME=/ Nov 6 00:27:51.892050 kernel: TERM=linux Nov 6 00:27:51.892068 systemd[1]: Successfully made /usr/ read-only. 
Nov 6 00:27:51.892089 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:27:51.892110 systemd[1]: Detected virtualization amazon.
Nov 6 00:27:51.892127 systemd[1]: Detected architecture x86-64.
Nov 6 00:27:51.892143 systemd[1]: Running in initrd.
Nov 6 00:27:51.892161 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:27:51.892180 systemd[1]: Hostname set to .
Nov 6 00:27:51.892197 systemd[1]: Initializing machine ID from VM UUID.
Nov 6 00:27:51.892216 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:27:51.892233 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:27:51.892255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:27:51.892275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:27:51.892292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:27:51.892311 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:27:51.892331 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:27:51.892355 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 00:27:51.892377 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 00:27:51.892395 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:27:51.892414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:27:51.892432 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:27:51.892451 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:27:51.892469 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:27:51.892487 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:27:51.892504 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:27:51.892524 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:27:51.892545 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:27:51.892563 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:27:51.892582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:27:51.892601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:27:51.892619 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:27:51.892635 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:27:51.892653 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:27:51.892670 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:27:51.892692 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:27:51.892711 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:27:51.892729 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:27:51.892747 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:27:51.892765 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:27:51.893823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:27:51.893846 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:27:51.893903 systemd-journald[189]: Collecting audit messages is disabled.
Nov 6 00:27:51.893941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:27:51.893959 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:27:51.893976 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:27:51.893994 systemd-journald[189]: Journal started
Nov 6 00:27:51.894032 systemd-journald[189]: Runtime Journal (/run/log/journal/ec296ba6d382ae9563df08e4889ae9fb) is 4.7M, max 38.1M, 33.3M free.
Nov 6 00:27:51.911817 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:27:51.915215 systemd-modules-load[190]: Inserted module 'overlay'
Nov 6 00:27:51.917464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:27:51.927951 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:27:51.932921 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:27:51.936319 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:27:51.942928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:27:51.961800 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:27:51.967823 kernel: Bridge firewalling registered
Nov 6 00:27:51.969837 systemd-modules-load[190]: Inserted module 'br_netfilter'
Nov 6 00:27:51.971224 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:27:51.974561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:27:51.977834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:27:51.980936 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:27:51.989293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:27:51.991763 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:27:51.995931 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:27:51.999865 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:27:52.005819 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:27:52.007160 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:27:52.024344 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:27:52.067013 systemd-resolved[228]: Positive Trust Anchors:
Nov 6 00:27:52.067029 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:27:52.067088 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:27:52.075145 systemd-resolved[228]: Defaulting to hostname 'linux'.
Nov 6 00:27:52.078515 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:27:52.079222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:27:52.145819 kernel: SCSI subsystem initialized
Nov 6 00:27:52.155810 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:27:52.166814 kernel: iscsi: registered transport (tcp)
Nov 6 00:27:52.189067 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:27:52.189147 kernel: QLogic iSCSI HBA Driver
Nov 6 00:27:52.208862 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:27:52.230871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:27:52.234117 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:27:52.280601 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:27:52.282855 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:27:52.338820 kernel: raid6: avx512x4 gen() 18039 MB/s
Nov 6 00:27:52.356814 kernel: raid6: avx512x2 gen() 18076 MB/s
Nov 6 00:27:52.374815 kernel: raid6: avx512x1 gen() 18060 MB/s
Nov 6 00:27:52.392809 kernel: raid6: avx2x4 gen() 17881 MB/s
Nov 6 00:27:52.410833 kernel: raid6: avx2x2 gen() 17991 MB/s
Nov 6 00:27:52.429058 kernel: raid6: avx2x1 gen() 13639 MB/s
Nov 6 00:27:52.429133 kernel: raid6: using algorithm avx512x2 gen() 18076 MB/s
Nov 6 00:27:52.448056 kernel: raid6: .... xor() 23825 MB/s, rmw enabled
Nov 6 00:27:52.448123 kernel: raid6: using avx512x2 recovery algorithm
Nov 6 00:27:52.469827 kernel: xor: automatically using best checksumming function avx
Nov 6 00:27:52.636814 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:27:52.644020 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:27:52.646252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:27:52.673998 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Nov 6 00:27:52.680853 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:27:52.685456 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:27:52.710318 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Nov 6 00:27:52.737854 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:27:52.739986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:27:52.802758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:27:52.807546 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:27:52.893789 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 6 00:27:52.894048 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 6 00:27:52.898809 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 6 00:27:52.919806 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:27:52.924548 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 6 00:27:52.924840 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 6 00:27:52.930809 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:84:17:03:5d:11
Nov 6 00:27:52.935938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:27:52.938499 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 6 00:27:52.937828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:27:52.939025 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:27:52.943802 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 00:27:52.943858 kernel: GPT:9289727 != 33554431
Nov 6 00:27:52.945738 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 00:27:52.945797 kernel: GPT:9289727 != 33554431
Nov 6 00:27:52.946248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:27:52.949227 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 00:27:52.949249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 6 00:27:52.952458 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:27:52.961286 (udev-worker)[492]: Network interface NamePolicy= disabled on kernel command line.
Nov 6 00:27:52.963326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:27:52.963454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:27:52.967101 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:27:52.981331 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 00:27:52.980691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:27:52.985811 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:27:53.028443 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:27:53.036809 kernel: nvme nvme0: using unchecked data buffer
Nov 6 00:27:53.098646 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 6 00:27:53.145589 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 6 00:27:53.156153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 6 00:27:53.156963 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:27:53.167843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 6 00:27:53.168475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 6 00:27:53.169961 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:27:53.171200 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:27:53.172520 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:27:53.174311 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:27:53.178964 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:27:53.194614 disk-uuid[676]: Primary Header is updated.
Nov 6 00:27:53.194614 disk-uuid[676]: Secondary Entries is updated.
Nov 6 00:27:53.194614 disk-uuid[676]: Secondary Header is updated.
Nov 6 00:27:53.200816 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:27:53.202928 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 6 00:27:54.213370 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 6 00:27:54.214843 disk-uuid[680]: The operation has completed successfully.
Nov 6 00:27:54.352025 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:27:54.352176 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:27:54.380899 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 6 00:27:54.398625 sh[944]: Success
Nov 6 00:27:54.419959 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:27:54.420050 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:27:54.420813 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:27:54.432806 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Nov 6 00:27:54.532968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:27:54.537887 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 6 00:27:54.547927 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 6 00:27:54.570825 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (967)
Nov 6 00:27:54.574811 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175
Nov 6 00:27:54.574878 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:27:54.613231 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 6 00:27:54.613311 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:27:54.613325 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:27:54.617076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 6 00:27:54.618033 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:27:54.618569 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 00:27:54.619768 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:27:54.620719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:27:54.659969 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1000)
Nov 6 00:27:54.663122 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:27:54.663176 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:27:54.672868 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 6 00:27:54.672940 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 6 00:27:54.680986 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:27:54.682170 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:27:54.685156 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:27:54.735572 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:27:54.738279 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:27:54.795762 systemd-networkd[1136]: lo: Link UP
Nov 6 00:27:54.795774 systemd-networkd[1136]: lo: Gained carrier
Nov 6 00:27:54.797470 systemd-networkd[1136]: Enumeration completed
Nov 6 00:27:54.797596 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:27:54.798479 systemd[1]: Reached target network.target - Network.
Nov 6 00:27:54.798607 systemd-networkd[1136]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:27:54.798612 systemd-networkd[1136]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:27:54.802609 systemd-networkd[1136]: eth0: Link UP
Nov 6 00:27:54.802615 systemd-networkd[1136]: eth0: Gained carrier
Nov 6 00:27:54.802631 systemd-networkd[1136]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:27:54.813972 systemd-networkd[1136]: eth0: DHCPv4 address 172.31.28.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 6 00:27:55.020428 ignition[1067]: Ignition 2.22.0
Nov 6 00:27:55.020440 ignition[1067]: Stage: fetch-offline
Nov 6 00:27:55.020612 ignition[1067]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:27:55.020619 ignition[1067]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 6 00:27:55.020972 ignition[1067]: Ignition finished successfully
Nov 6 00:27:55.022936 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:27:55.024468 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 6 00:27:55.056441 ignition[1146]: Ignition 2.22.0
Nov 6 00:27:55.056459 ignition[1146]: Stage: fetch
Nov 6 00:27:55.056845 ignition[1146]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:27:55.056858 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 6 00:27:55.056963 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 6 00:27:55.072917 ignition[1146]: PUT result: OK
Nov 6 00:27:55.075429 ignition[1146]: parsed url from cmdline: ""
Nov 6 00:27:55.075441 ignition[1146]: no config URL provided
Nov 6 00:27:55.075448 ignition[1146]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:27:55.075460 ignition[1146]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:27:55.075488 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 6 00:27:55.076351 ignition[1146]: PUT result: OK
Nov 6 00:27:55.076414 ignition[1146]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 6 00:27:55.076979 ignition[1146]: GET result: OK
Nov 6 00:27:55.077050 ignition[1146]: parsing config with SHA512: 73bcb4a16ded2391afca7b3f55dc514f63a3e307bc1ed8a031fba6ee73c72830010eee5d6c95fec4986adc1c68c78020b3d2781387eb5212463c32bd267aedab
Nov 6 00:27:55.082455 unknown[1146]: fetched base config from "system"
Nov 6 00:27:55.082465 unknown[1146]: fetched base config from "system"
Nov 6 00:27:55.082931 ignition[1146]: fetch: fetch complete
Nov 6 00:27:55.082471 unknown[1146]: fetched user config from "aws"
Nov 6 00:27:55.082936 ignition[1146]: fetch: fetch passed
Nov 6 00:27:55.082978 ignition[1146]: Ignition finished successfully
Nov 6 00:27:55.085093 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 6 00:27:55.086681 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 00:27:55.122949 ignition[1153]: Ignition 2.22.0
Nov 6 00:27:55.122965 ignition[1153]: Stage: kargs
Nov 6 00:27:55.123333 ignition[1153]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:27:55.123345 ignition[1153]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 6 00:27:55.123452 ignition[1153]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 6 00:27:55.124468 ignition[1153]: PUT result: OK
Nov 6 00:27:55.126929 ignition[1153]: kargs: kargs passed
Nov 6 00:27:55.127021 ignition[1153]: Ignition finished successfully
Nov 6 00:27:55.129116 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 00:27:55.130919 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 00:27:55.164410 ignition[1160]: Ignition 2.22.0
Nov 6 00:27:55.164427 ignition[1160]: Stage: disks
Nov 6 00:27:55.164840 ignition[1160]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:27:55.164854 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 6 00:27:55.164971 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 6 00:27:55.165818 ignition[1160]: PUT result: OK
Nov 6 00:27:55.168110 ignition[1160]: disks: disks passed
Nov 6 00:27:55.168189 ignition[1160]: Ignition finished successfully
Nov 6 00:27:55.170324 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 00:27:55.170976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 00:27:55.171361 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 00:27:55.172094 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:27:55.172643 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:27:55.173226 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:27:55.174889 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 00:27:55.222948 systemd-fsck[1169]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 6 00:27:55.225634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 00:27:55.227824 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 00:27:55.390938 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none.
Nov 6 00:27:55.391108 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 00:27:55.392108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:27:55.394597 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:27:55.397870 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 00:27:55.399553 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 00:27:55.400379 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 00:27:55.400407 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:27:55.405482 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 00:27:55.407455 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 00:27:55.422822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1188)
Nov 6 00:27:55.427822 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:27:55.427888 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:27:55.436236 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 6 00:27:55.436301 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 6 00:27:55.438460 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:27:55.659680 initrd-setup-root[1212]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 00:27:55.676861 initrd-setup-root[1219]: cut: /sysroot/etc/group: No such file or directory
Nov 6 00:27:55.682578 initrd-setup-root[1226]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 00:27:55.687245 initrd-setup-root[1233]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 00:27:55.907035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 00:27:55.909309 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 00:27:55.912928 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 00:27:55.927366 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 00:27:55.929853 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:27:55.961440 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 00:27:55.968495 ignition[1301]: INFO : Ignition 2.22.0
Nov 6 00:27:55.968495 ignition[1301]: INFO : Stage: mount
Nov 6 00:27:55.970143 ignition[1301]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:27:55.970143 ignition[1301]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 6 00:27:55.970143 ignition[1301]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 6 00:27:55.971665 ignition[1301]: INFO : PUT result: OK
Nov 6 00:27:55.972754 ignition[1301]: INFO : mount: mount passed
Nov 6 00:27:55.974189 ignition[1301]: INFO : Ignition finished successfully
Nov 6 00:27:55.974920 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 00:27:55.976460 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 00:27:55.998800 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:27:56.027970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1313)
Nov 6 00:27:56.030954 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:27:56.031021 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:27:56.039152 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 6 00:27:56.039219 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 6 00:27:56.041416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:27:56.074181 ignition[1330]: INFO : Ignition 2.22.0
Nov 6 00:27:56.074181 ignition[1330]: INFO : Stage: files
Nov 6 00:27:56.075792 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:27:56.075792 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 6 00:27:56.075792 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 6 00:27:56.075792 ignition[1330]: INFO : PUT result: OK
Nov 6 00:27:56.077974 ignition[1330]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 00:27:56.078703 ignition[1330]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 00:27:56.078703 ignition[1330]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 00:27:56.082236 ignition[1330]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 00:27:56.082834 ignition[1330]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 00:27:56.082834 ignition[1330]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 00:27:56.082583 unknown[1330]: wrote ssh authorized keys file for user: core
Nov 6 00:27:56.085872 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 00:27:56.086468 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 6 00:27:56.169692 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 00:27:56.322941 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 6 00:27:56.322941 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:27:56.333964 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:27:56.347004 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:27:56.347004 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:27:56.347004 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 6 00:27:56.642940 systemd-networkd[1136]: eth0: Gained IPv6LL
Nov 6 00:27:56.809324 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 6 00:27:57.333833 ignition[1330]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 6 00:27:57.333833 ignition[1330]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 6 00:27:57.336281 ignition[1330]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:27:57.340073 ignition[1330]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:27:57.340073 ignition[1330]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 6 00:27:57.340073 ignition[1330]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 00:27:57.342425 ignition[1330]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 00:27:57.342425 ignition[1330]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:27:57.342425 ignition[1330]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:27:57.342425 ignition[1330]: INFO : files: files passed
Nov 6 00:27:57.342425 ignition[1330]: INFO : Ignition finished successfully
Nov 6 00:27:57.342468 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 00:27:57.344626 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 00:27:57.349602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 00:27:57.362584 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:27:57.362696 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:27:57.368548 initrd-setup-root-after-ignition[1361]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:27:57.368548 initrd-setup-root-after-ignition[1361]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:27:57.371379 initrd-setup-root-after-ignition[1365]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:27:57.371740 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:27:57.373084 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:27:57.374418 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 00:27:57.430509 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 00:27:57.430668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 00:27:57.432073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 00:27:57.433165 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 00:27:57.434017 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:27:57.435180 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:27:57.475166 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:27:57.477083 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:27:57.498355 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:27:57.498982 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:27:57.499545 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:27:57.501287 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:27:57.502420 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:27:57.503408 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:27:57.504156 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:27:57.504916 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:27:57.505663 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:27:57.506430 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:27:57.507134 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:27:57.507989 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:27:57.508771 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:27:57.509574 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:27:57.510623 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:27:57.511370 systemd[1]: Stopped target swap.target - Swaps. 
Nov 6 00:27:57.512202 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:27:57.512429 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:27:57.513433 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:27:57.514236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:27:57.514906 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:27:57.515198 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:27:57.515821 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:27:57.516039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:27:57.517347 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:27:57.517622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:27:57.518317 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:27:57.518505 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:27:57.521906 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:27:57.522407 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:27:57.522629 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:27:57.525902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:27:57.528913 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:27:57.529121 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:27:57.530257 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:27:57.530456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:27:57.536666 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 6 00:27:57.538890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:27:57.562508 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:27:57.566817 ignition[1385]: INFO : Ignition 2.22.0 Nov 6 00:27:57.566817 ignition[1385]: INFO : Stage: umount Nov 6 00:27:57.566817 ignition[1385]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:27:57.566817 ignition[1385]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 6 00:27:57.566817 ignition[1385]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 6 00:27:57.572038 ignition[1385]: INFO : PUT result: OK Nov 6 00:27:57.572038 ignition[1385]: INFO : umount: umount passed Nov 6 00:27:57.572038 ignition[1385]: INFO : Ignition finished successfully Nov 6 00:27:57.570839 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:27:57.570984 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:27:57.575033 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:27:57.575202 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:27:57.576200 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:27:57.576264 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:27:57.576710 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:27:57.576771 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:27:57.577372 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:27:57.577431 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:27:57.578023 systemd[1]: Stopped target network.target - Network. Nov 6 00:27:57.578595 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:27:57.578662 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:27:57.579280 systemd[1]: Stopped target paths.target - Path Units. 
Nov 6 00:27:57.580018 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:27:57.584945 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:27:57.585387 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:27:57.586277 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:27:57.586937 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:27:57.586985 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:27:57.587809 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:27:57.587855 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:27:57.588460 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:27:57.588527 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:27:57.589326 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:27:57.589376 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:27:57.589964 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:27:57.590031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:27:57.590663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:27:57.591263 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:27:57.597452 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:27:57.597568 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:27:57.601219 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:27:57.602860 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:27:57.602941 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 6 00:27:57.605335 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:27:57.605583 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:27:57.605695 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:27:57.607689 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:27:57.608430 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:27:57.609087 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:27:57.609127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:27:57.610976 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:27:57.612316 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:27:57.612392 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:27:57.613041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:27:57.613105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:27:57.617174 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:27:57.617262 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:27:57.618038 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:27:57.620626 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:27:57.637222 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:27:57.637382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:27:57.639846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:27:57.639909 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Nov 6 00:27:57.640658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:27:57.640691 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:27:57.641285 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:27:57.641334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:27:57.642324 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:27:57.642370 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:27:57.643585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:27:57.643652 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:27:57.646939 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:27:57.647816 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:27:57.648202 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:27:57.649357 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:27:57.649745 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:27:57.650884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:27:57.650947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:27:57.652634 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:27:57.652934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:27:57.661105 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:27:57.661238 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:27:57.662402 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Nov 6 00:27:57.664187 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:27:57.683088 systemd[1]: Switching root. Nov 6 00:27:57.718752 systemd-journald[189]: Journal stopped Nov 6 00:28:00.199602 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). Nov 6 00:28:00.199703 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:28:00.199726 kernel: SELinux: policy capability open_perms=1 Nov 6 00:28:00.199744 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:28:00.199764 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:28:00.206099 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:28:00.206233 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:28:00.206258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:28:00.206279 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:28:00.206298 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:28:00.206331 kernel: audit: type=1403 audit(1762388878.091:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:28:00.206362 systemd[1]: Successfully loaded SELinux policy in 93.767ms. Nov 6 00:28:00.206404 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.105ms. Nov 6 00:28:00.206428 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:28:00.206507 systemd[1]: Detected virtualization amazon. Nov 6 00:28:00.206536 systemd[1]: Detected architecture x86-64. Nov 6 00:28:00.206558 systemd[1]: Detected first boot. Nov 6 00:28:00.206580 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:28:00.206602 zram_generator::config[1428]: No configuration found. 
Nov 6 00:28:00.206627 kernel: Guest personality initialized and is inactive Nov 6 00:28:00.206647 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:28:00.206668 kernel: Initialized host personality Nov 6 00:28:00.206687 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:28:00.206707 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:28:00.206731 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:28:00.206753 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:28:00.206775 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:28:00.206970 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:28:00.207039 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:28:00.207062 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:28:00.207084 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:28:00.207106 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:28:00.207128 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:28:00.207149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:28:00.207170 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:28:00.207196 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:28:00.207217 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:28:00.207239 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:28:00.207343 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Nov 6 00:28:00.207368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:28:00.207522 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:28:00.207545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:28:00.207567 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:28:00.207591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:28:00.207613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:28:00.207634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:28:00.207655 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:28:00.207678 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:28:00.207701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:28:00.207725 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:28:00.207747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:28:00.207768 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:28:00.207808 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:28:00.207830 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:28:00.207912 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:28:00.207937 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:28:00.207958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:28:00.207980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:28:00.208001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 6 00:28:00.208023 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:28:00.208044 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:28:00.208069 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:28:00.208091 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:28:00.208118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:28:00.208139 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:28:00.208161 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:28:00.208182 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:28:00.208205 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:28:00.208226 systemd[1]: Reached target machines.target - Containers. Nov 6 00:28:00.208247 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:28:00.208273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:28:00.208295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:28:00.208316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:28:00.208337 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:28:00.208404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:28:00.208426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:28:00.208447 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Nov 6 00:28:00.208469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:28:00.208495 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:28:00.208516 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:28:00.208538 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:28:00.208559 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:28:00.208580 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:28:00.208602 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:28:00.208624 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:28:00.208645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:28:00.208667 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:28:00.208691 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:28:00.208712 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:28:00.208734 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:28:00.208755 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:28:00.257615 systemd[1]: Stopped verity-setup.service. Nov 6 00:28:00.257695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:28:00.257719 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 6 00:28:00.257742 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:28:00.257764 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:28:00.257816 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:28:00.257841 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:28:00.257863 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:28:00.257885 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:28:00.257907 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:28:00.257928 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:28:00.257949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:28:00.257975 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:28:00.257997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:28:00.258022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:28:00.258045 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:28:00.258067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:28:00.258089 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:28:00.258110 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:28:00.258134 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:28:00.258157 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:28:00.258179 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 6 00:28:00.258201 kernel: loop: module loaded Nov 6 00:28:00.258228 kernel: fuse: init (API version 7.41) Nov 6 00:28:00.258252 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:28:00.258273 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:28:00.258353 systemd-journald[1518]: Collecting audit messages is disabled. Nov 6 00:28:00.258395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:28:00.258421 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:28:00.258443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:28:00.258531 systemd-journald[1518]: Journal started Nov 6 00:28:00.258579 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec296ba6d382ae9563df08e4889ae9fb) is 4.7M, max 38.1M, 33.3M free. Nov 6 00:27:59.181423 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:28:00.283265 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:28:00.283341 kernel: ACPI: bus type drm_connector registered Nov 6 00:27:59.194041 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 6 00:27:59.194449 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:28:00.293825 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:28:00.330210 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:28:00.355478 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:28:00.369912 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:28:00.383541 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 6 00:28:00.387687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:28:00.394557 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:28:00.395872 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:28:00.407715 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:28:00.408828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:28:00.424668 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:28:00.495775 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:28:00.526987 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:28:00.539382 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:28:00.560009 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:28:00.568002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:28:00.619486 kernel: loop0: detected capacity change from 0 to 72368 Nov 6 00:28:00.584011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:28:00.642809 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:28:00.644930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:28:00.651890 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:28:00.669818 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:28:00.697966 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec296ba6d382ae9563df08e4889ae9fb is 176.124ms for 1021 entries. Nov 6 00:28:00.697966 systemd-journald[1518]: System Journal (/var/log/journal/ec296ba6d382ae9563df08e4889ae9fb) is 8M, max 195.6M, 187.6M free. 
Nov 6 00:28:00.896329 systemd-journald[1518]: Received client request to flush runtime journal. Nov 6 00:28:00.896417 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:28:00.896446 kernel: loop1: detected capacity change from 0 to 128016 Nov 6 00:28:00.706862 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:28:00.899325 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:28:00.927200 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:28:00.935875 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:28:00.943594 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:28:00.951065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:28:00.972818 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 00:28:00.998810 systemd-tmpfiles[1580]: ACLs are not supported, ignoring. Nov 6 00:28:00.998839 systemd-tmpfiles[1580]: ACLs are not supported, ignoring. Nov 6 00:28:01.005981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:28:01.107818 kernel: loop3: detected capacity change from 0 to 110984 Nov 6 00:28:01.209928 kernel: loop4: detected capacity change from 0 to 72368 Nov 6 00:28:01.238905 kernel: loop5: detected capacity change from 0 to 128016 Nov 6 00:28:01.260860 kernel: loop6: detected capacity change from 0 to 229808 Nov 6 00:28:01.293813 kernel: loop7: detected capacity change from 0 to 110984 Nov 6 00:28:01.314008 (sd-merge)[1587]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 6 00:28:01.314727 (sd-merge)[1587]: Merged extensions into '/usr'. Nov 6 00:28:01.323210 systemd[1]: Reload requested from client PID 1543 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:28:01.323575 systemd[1]: Reloading... 
Nov 6 00:28:01.482871 zram_generator::config[1622]: No configuration found. Nov 6 00:28:01.824005 systemd[1]: Reloading finished in 499 ms. Nov 6 00:28:01.846370 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:28:01.853056 systemd[1]: Starting ensure-sysext.service... Nov 6 00:28:01.856957 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:28:01.887451 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:28:01.887493 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:28:01.888273 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:28:01.888732 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:28:01.890066 systemd-tmpfiles[1665]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:28:01.890613 systemd-tmpfiles[1665]: ACLs are not supported, ignoring. Nov 6 00:28:01.890772 systemd-tmpfiles[1665]: ACLs are not supported, ignoring. Nov 6 00:28:01.894484 systemd[1]: Reload requested from client PID 1664 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:28:01.894501 systemd[1]: Reloading... Nov 6 00:28:01.900151 systemd-tmpfiles[1665]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:28:01.900168 systemd-tmpfiles[1665]: Skipping /boot Nov 6 00:28:01.927202 systemd-tmpfiles[1665]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:28:01.927218 systemd-tmpfiles[1665]: Skipping /boot Nov 6 00:28:02.010820 zram_generator::config[1690]: No configuration found. 
Nov 6 00:28:02.089862 ldconfig[1539]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 6 00:28:02.228158 systemd[1]: Reloading finished in 332 ms.
Nov 6 00:28:02.255828 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 6 00:28:02.256638 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 6 00:28:02.268746 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:28:02.279027 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:28:02.282907 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 6 00:28:02.290979 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 6 00:28:02.296342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:28:02.301120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:28:02.305110 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 6 00:28:02.319241 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:28:02.319568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:28:02.321199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:28:02.332224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 6 00:28:02.343916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 6 00:28:02.344758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:28:02.345062 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:28:02.345210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:28:02.352108 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 6 00:28:02.362330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:28:02.365167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:28:02.368739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:28:02.370812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:28:02.371128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:28:02.371280 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:28:02.371457 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:28:02.380224 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:28:02.380623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 6 00:28:02.391018 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 6 00:28:02.399304 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 6 00:28:02.402106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 6 00:28:02.402333 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 6 00:28:02.402616 systemd[1]: Reached target time-set.target - System Time Set.
Nov 6 00:28:02.403464 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 6 00:28:02.405714 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 6 00:28:02.407277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 6 00:28:02.408159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 6 00:28:02.409731 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 6 00:28:02.410766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 6 00:28:02.419181 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 6 00:28:02.422460 systemd[1]: Finished ensure-sysext.service.
Nov 6 00:28:02.446412 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 6 00:28:02.451637 systemd-udevd[1753]: Using default interface naming scheme 'v255'.
Nov 6 00:28:02.453048 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 6 00:28:02.459228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 6 00:28:02.461255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 6 00:28:02.463490 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 6 00:28:02.463751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 6 00:28:02.466552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 6 00:28:02.484530 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 6 00:28:02.493636 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 6 00:28:02.516578 augenrules[1794]: No rules
Nov 6 00:28:02.520474 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:28:02.520912 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:28:02.539918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:28:02.544255 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:28:02.550954 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 6 00:28:02.552375 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 6 00:28:02.677858 systemd-resolved[1751]: Positive Trust Anchors:
Nov 6 00:28:02.679840 systemd-resolved[1751]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:28:02.679900 systemd-resolved[1751]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:28:02.689236 systemd-resolved[1751]: Defaulting to hostname 'linux'.
Nov 6 00:28:02.694188 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:28:02.696233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:28:02.696902 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:28:02.697997 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 6 00:28:02.700926 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 6 00:28:02.701525 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 6 00:28:02.702346 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 6 00:28:02.703024 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 6 00:28:02.703565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 6 00:28:02.704877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 6 00:28:02.704928 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:28:02.705416 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:28:02.709016 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 6 00:28:02.713986 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 6 00:28:02.728444 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 6 00:28:02.730152 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 6 00:28:02.731292 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 6 00:28:02.744187 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 6 00:28:02.745296 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 6 00:28:02.748612 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 6 00:28:02.753812 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 6 00:28:02.753869 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:28:02.755892 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:28:02.756594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:28:02.756627 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 6 00:28:02.762080 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 6 00:28:02.767056 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 6 00:28:02.772035 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 6 00:28:02.776007 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 6 00:28:02.779527 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 6 00:28:02.780882 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 6 00:28:02.784502 systemd-networkd[1803]: lo: Link UP
Nov 6 00:28:02.784514 systemd-networkd[1803]: lo: Gained carrier
Nov 6 00:28:02.788715 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 6 00:28:02.792963 systemd-networkd[1803]: Enumeration completed
Nov 6 00:28:02.797105 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 6 00:28:02.804759 systemd[1]: Started ntpd.service - Network Time Service.
Nov 6 00:28:02.816024 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 6 00:28:02.823414 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 6 00:28:02.831965 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 6 00:28:02.857028 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 6 00:28:02.867489 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 6 00:28:02.872206 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 6 00:28:02.875352 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 6 00:28:02.877062 systemd[1]: Starting update-engine.service - Update Engine...
Nov 6 00:28:02.887112 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 6 00:28:02.895615 jq[1835]: false
Nov 6 00:28:02.889538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:28:02.914036 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 6 00:28:02.915088 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 6 00:28:02.917111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 6 00:28:02.927131 systemd[1]: Reached target network.target - Network.
Nov 6 00:28:02.935811 google_oslogin_nss_cache[1837]: oslogin_cache_refresh[1837]: Refreshing passwd entry cache
Nov 6 00:28:02.930558 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 6 00:28:02.928544 oslogin_cache_refresh[1837]: Refreshing passwd entry cache
Nov 6 00:28:02.937070 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 6 00:28:02.945409 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 6 00:28:02.947305 extend-filesystems[1836]: Found /dev/nvme0n1p6
Nov 6 00:28:02.969211 (udev-worker)[1813]: Network interface NamePolicy= disabled on kernel command line.
Nov 6 00:28:02.965074 oslogin_cache_refresh[1837]: Failure getting users, quitting
Nov 6 00:28:03.033262 google_oslogin_nss_cache[1837]: oslogin_cache_refresh[1837]: Failure getting users, quitting
Nov 6 00:28:03.033262 google_oslogin_nss_cache[1837]: oslogin_cache_refresh[1837]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:28:03.033262 google_oslogin_nss_cache[1837]: oslogin_cache_refresh[1837]: Refreshing group entry cache
Nov 6 00:28:03.033262 google_oslogin_nss_cache[1837]: oslogin_cache_refresh[1837]: Failure getting groups, quitting
Nov 6 00:28:03.033262 google_oslogin_nss_cache[1837]: oslogin_cache_refresh[1837]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:28:03.033441 update_engine[1848]: I20251106 00:28:02.993192 1848 main.cc:92] Flatcar Update Engine starting
Nov 6 00:28:03.033649 extend-filesystems[1836]: Found /dev/nvme0n1p9
Nov 6 00:28:03.033649 extend-filesystems[1836]: Checking size of /dev/nvme0n1p9
Nov 6 00:28:03.051809 jq[1849]: true
Nov 6 00:28:02.982256 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 6 00:28:02.965098 oslogin_cache_refresh[1837]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 6 00:28:02.983292 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 6 00:28:02.966523 oslogin_cache_refresh[1837]: Refreshing group entry cache
Nov 6 00:28:02.994454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 6 00:28:02.980067 oslogin_cache_refresh[1837]: Failure getting groups, quitting
Nov 6 00:28:02.995869 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 6 00:28:02.980083 oslogin_cache_refresh[1837]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 6 00:28:03.059495 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 6 00:28:03.078965 jq[1875]: true
Nov 6 00:28:03.088180 dbus-daemon[1832]: [system] SELinux support is enabled
Nov 6 00:28:03.088384 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 6 00:28:03.095985 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 6 00:28:03.096261 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 6 00:28:03.097413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 6 00:28:03.097437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 6 00:28:03.106629 tar[1857]: linux-amd64/LICENSE
Nov 6 00:28:03.106629 tar[1857]: linux-amd64/helm
Nov 6 00:28:03.116561 (ntainerd)[1886]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 6 00:28:03.122318 systemd[1]: motdgen.service: Deactivated successfully.
Nov 6 00:28:03.135419 extend-filesystems[1836]: Resized partition /dev/nvme0n1p9
Nov 6 00:28:03.137936 update_engine[1848]: I20251106 00:28:03.131040 1848 update_check_scheduler.cc:74] Next update check in 3m2s
Nov 6 00:28:03.122609 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 6 00:28:03.127560 systemd[1]: Started update-engine.service - Update Engine.
Nov 6 00:28:03.132694 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 6 00:28:03.147766 extend-filesystems[1901]: resize2fs 1.47.3 (8-Jul-2025)
Nov 6 00:28:03.165436 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 6 00:28:03.169468 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Nov 6 00:28:03.277218 coreos-metadata[1831]: Nov 06 00:28:03.274 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 6 00:28:03.284903 systemd-networkd[1803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:28:03.284918 systemd-networkd[1803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:28:03.304283 systemd-networkd[1803]: eth0: Link UP
Nov 6 00:28:03.306227 systemd-networkd[1803]: eth0: Gained carrier
Nov 6 00:28:03.306263 systemd-networkd[1803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:28:03.327704 systemd-networkd[1803]: eth0: DHCPv4 address 172.31.28.191/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 6 00:28:03.328112 dbus-daemon[1832]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1803 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 6 00:28:03.337393 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 6 00:28:03.343587 bash[1918]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:28:03.341513 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 6 00:28:03.343827 kernel: mousedev: PS/2 mouse device common for all mice
Nov 6 00:28:03.351295 systemd[1]: Starting sshkeys.service...
Nov 6 00:28:03.367962 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Nov 6 00:28:03.414406 extend-filesystems[1901]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 6 00:28:03.414406 extend-filesystems[1901]: old_desc_blocks = 1, new_desc_blocks = 2
Nov 6 00:28:03.414406 extend-filesystems[1901]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Nov 6 00:28:03.427361 extend-filesystems[1836]: Resized filesystem in /dev/nvme0n1p9
Nov 6 00:28:03.415966 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 6 00:28:03.416234 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 6 00:28:03.456364 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 6 00:28:03.499940 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 6 00:28:03.581061 ntpd[1839]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: ----------------------------------------------------
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: ntp-4 is maintained by Network Time Foundation,
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: corporation. Support and training for ntp-4 are
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: available at https://www.nwtime.org/support
Nov 6 00:28:03.582278 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: ----------------------------------------------------
Nov 6 00:28:03.581146 ntpd[1839]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 6 00:28:03.581159 ntpd[1839]: ----------------------------------------------------
Nov 6 00:28:03.581168 ntpd[1839]: ntp-4 is maintained by Network Time Foundation,
Nov 6 00:28:03.581178 ntpd[1839]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 6 00:28:03.581188 ntpd[1839]: corporation. Support and training for ntp-4 are
Nov 6 00:28:03.581198 ntpd[1839]: available at https://www.nwtime.org/support
Nov 6 00:28:03.581207 ntpd[1839]: ----------------------------------------------------
Nov 6 00:28:03.592773 ntpd[1839]: proto: precision = 0.061 usec (-24)
Nov 6 00:28:03.592923 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: proto: precision = 0.061 usec (-24)
Nov 6 00:28:03.597017 ntpd[1839]: basedate set to 2025-10-24
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: basedate set to 2025-10-24
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: gps base set to 2025-10-26 (week 2390)
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Listen and drop on 0 v6wildcard [::]:123
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Listen normally on 2 lo 127.0.0.1:123
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Listen normally on 3 eth0 172.31.28.191:123
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: Listen normally on 4 lo [::1]:123
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: bind(21) AF_INET6 [fe80::484:17ff:fe03:5d11%2]:123 flags 0x811 failed: Cannot assign requested address
Nov 6 00:28:03.597960 ntpd[1839]: 6 Nov 00:28:03 ntpd[1839]: unable to create socket on eth0 (5) for [fe80::484:17ff:fe03:5d11%2]:123
Nov 6 00:28:03.597046 ntpd[1839]: gps base set to 2025-10-26 (week 2390)
Nov 6 00:28:03.597203 ntpd[1839]: Listen and drop on 0 v6wildcard [::]:123
Nov 6 00:28:03.597235 ntpd[1839]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 6 00:28:03.597440 ntpd[1839]: Listen normally on 2 lo 127.0.0.1:123
Nov 6 00:28:03.597468 ntpd[1839]: Listen normally on 3 eth0 172.31.28.191:123
Nov 6 00:28:03.597497 ntpd[1839]: Listen normally on 4 lo [::1]:123
Nov 6 00:28:03.597526 ntpd[1839]: bind(21) AF_INET6 [fe80::484:17ff:fe03:5d11%2]:123 flags 0x811 failed: Cannot assign requested address
Nov 6 00:28:03.597546 ntpd[1839]: unable to create socket on eth0 (5) for [fe80::484:17ff:fe03:5d11%2]:123
Nov 6 00:28:03.598678 kernel: ntpd[1839]: segfault at 24 ip 000056373a691aeb sp 00007ffdeff16b20 error 4 in ntpd[68aeb,56373a62f000+80000] likely on CPU 0 (core 0, socket 0)
Nov 6 00:28:03.601926 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Nov 6 00:28:03.605862 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 6 00:28:03.651455 systemd-coredump[1958]: Process 1839 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Nov 6 00:28:03.658457 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Nov 6 00:28:03.666232 systemd[1]: Started systemd-coredump@0-1958-0.service - Process Core Dump (PID 1958/UID 0).
Nov 6 00:28:03.727798 kernel: ACPI: button: Power Button [PWRF]
Nov 6 00:28:03.734810 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Nov 6 00:28:03.751404 systemd-logind[1847]: New seat seat0.
Nov 6 00:28:03.754245 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 6 00:28:03.770806 sshd_keygen[1885]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 6 00:28:03.867261 coreos-metadata[1925]: Nov 06 00:28:03.866 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 6 00:28:03.869577 coreos-metadata[1925]: Nov 06 00:28:03.869 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Nov 6 00:28:03.873051 coreos-metadata[1925]: Nov 06 00:28:03.872 INFO Fetch successful
Nov 6 00:28:03.873051 coreos-metadata[1925]: Nov 06 00:28:03.872 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 6 00:28:03.873839 locksmithd[1899]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 6 00:28:03.880839 coreos-metadata[1925]: Nov 06 00:28:03.876 INFO Fetch successful
Nov 6 00:28:03.887012 unknown[1925]: wrote ssh authorized keys file for user: core
Nov 6 00:28:03.923323 kernel: ACPI: button: Sleep Button [SLPF]
Nov 6 00:28:03.918322 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 6 00:28:03.927035 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 6 00:28:03.962805 update-ssh-keys[2002]: Updated "/home/core/.ssh/authorized_keys"
Nov 6 00:28:03.964316 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 6 00:28:03.975855 systemd[1]: Finished sshkeys.service.
Nov 6 00:28:03.991825 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Nov 6 00:28:04.018926 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 6 00:28:04.030260 dbus-daemon[1832]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 6 00:28:04.038257 containerd[1886]: time="2025-11-06T00:28:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 6 00:28:04.039805 containerd[1886]: time="2025-11-06T00:28:04.038892499Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 6 00:28:04.047099 systemd[1]: issuegen.service: Deactivated successfully.
Nov 6 00:28:04.047410 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 6 00:28:04.054250 dbus-daemon[1832]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1920 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 6 00:28:04.058150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.071968219Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.599µs"
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072017837Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072044179Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072236404Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072268175Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072302123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072374499Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 6 00:28:04.072551 containerd[1886]: time="2025-11-06T00:28:04.072389638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:28:04.073945 containerd[1886]: time="2025-11-06T00:28:04.072640905Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 6 00:28:04.073945 containerd[1886]: time="2025-11-06T00:28:04.072662330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:28:04.073945 containerd[1886]: time="2025-11-06T00:28:04.072680290Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 6 00:28:04.073945 containerd[1886]: time="2025-11-06T00:28:04.072693368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 6 00:28:04.073945 containerd[1886]: time="2025-11-06T00:28:04.073845332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 6 00:28:04.073132 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 6 00:28:04.074214 containerd[1886]: time="2025-11-06T00:28:04.074136182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:28:04.074214 containerd[1886]: time="2025-11-06T00:28:04.074178849Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 6 00:28:04.074214 containerd[1886]: time="2025-11-06T00:28:04.074194899Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 6 00:28:04.074312 containerd[1886]: time="2025-11-06T00:28:04.074258023Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 6 00:28:04.075146 containerd[1886]: time="2025-11-06T00:28:04.074731052Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 6 00:28:04.075146 containerd[1886]: time="2025-11-06T00:28:04.074836481Z" level=info msg="metadata content store policy set" policy=shared
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080246653Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080342393Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080365077Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080431065Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080452267Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080468568Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080489601Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080508066Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080524151Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080540265Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080558053Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080578089Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080719069Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 6 00:28:04.082732 containerd[1886]: time="2025-11-06T00:28:04.080743052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.080766644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081833562Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081865038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081880960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081899122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081915694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081938547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081958190Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.081984524Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.082077697Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.082108761Z" level=info msg="Start snapshots syncer"
Nov 6 00:28:04.083278 containerd[1886]: time="2025-11-06T00:28:04.082160881Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 6 00:28:04.083693 containerd[1886]: time="2025-11-06T00:28:04.083149036Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/c
di\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:28:04.083693 containerd[1886]: time="2025-11-06T00:28:04.083223250Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:28:04.083927 containerd[1886]: time="2025-11-06T00:28:04.083903651Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084095166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084132564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084149004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084167749Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084190984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084207951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084229116Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:28:04.087460 containerd[1886]: 
time="2025-11-06T00:28:04.084262729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084278647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.084296186Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.086828641Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.086924778Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:28:04.087460 containerd[1886]: time="2025-11-06T00:28:04.086943219Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.086958942Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.086971630Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.086987038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.087004138Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.087028972Z" level=info msg="runtime interface created" Nov 6 
00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.087036866Z" level=info msg="created NRI interface" Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.087051603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.087070603Z" level=info msg="Connect containerd service" Nov 6 00:28:04.088023 containerd[1886]: time="2025-11-06T00:28:04.087113611Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:28:04.090806 containerd[1886]: time="2025-11-06T00:28:04.089256784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:28:04.107856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:28:04.115887 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:28:04.123724 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:28:04.124703 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:28:04.246046 systemd-coredump[1959]: Process 1839 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1839: #0 0x000056373a691aeb n/a (ntpd + 0x68aeb) #1 0x000056373a63acdf n/a (ntpd + 0x11cdf) #2 0x000056373a63b575 n/a (ntpd + 0x12575) #3 0x000056373a636d8a n/a (ntpd + 0xdd8a) #4 0x000056373a6385d3 n/a (ntpd + 0xf5d3) #5 0x000056373a640fd1 n/a (ntpd + 0x17fd1) #6 0x000056373a631c2d n/a (ntpd + 0x8c2d) #7 0x00007ff3ec96b16c n/a (libc.so.6 + 0x2716c) #8 0x00007ff3ec96b229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000056373a631c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Nov 6 00:28:04.273343 systemd[1]: systemd-coredump@0-1958-0.service: Deactivated successfully. Nov 6 00:28:04.288419 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 6 00:28:04.288618 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 6 00:28:04.411308 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 6 00:28:04.416551 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 6 00:28:04.420505 systemd[1]: Started ntpd.service - Network Time Service. Nov 6 00:28:04.425144 coreos-metadata[1831]: Nov 06 00:28:04.425 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2 Nov 6 00:28:04.425982 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Nov 6 00:28:04.434747 coreos-metadata[1831]: Nov 06 00:28:04.433 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 6 00:28:04.437814 coreos-metadata[1831]: Nov 06 00:28:04.437 INFO Fetch successful Nov 6 00:28:04.437814 coreos-metadata[1831]: Nov 06 00:28:04.437 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 6 00:28:04.440049 coreos-metadata[1831]: Nov 06 00:28:04.439 INFO Fetch successful Nov 6 00:28:04.440049 coreos-metadata[1831]: Nov 06 00:28:04.439 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 6 00:28:04.445153 coreos-metadata[1831]: Nov 06 00:28:04.444 INFO Fetch successful Nov 6 00:28:04.445153 coreos-metadata[1831]: Nov 06 00:28:04.444 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 6 00:28:04.453658 coreos-metadata[1831]: Nov 06 00:28:04.449 INFO Fetch successful Nov 6 00:28:04.453658 coreos-metadata[1831]: Nov 06 00:28:04.449 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 6 00:28:04.454492 coreos-metadata[1831]: Nov 06 00:28:04.454 INFO Fetch failed with 404: resource not found Nov 6 00:28:04.454492 coreos-metadata[1831]: Nov 06 00:28:04.454 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 6 00:28:04.456590 coreos-metadata[1831]: Nov 06 00:28:04.456 INFO Fetch successful Nov 6 00:28:04.456590 coreos-metadata[1831]: Nov 06 00:28:04.456 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 6 00:28:04.459361 coreos-metadata[1831]: Nov 06 00:28:04.459 INFO Fetch successful Nov 6 00:28:04.460596 coreos-metadata[1831]: Nov 06 00:28:04.460 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 6 00:28:04.466773 coreos-metadata[1831]: Nov 06 00:28:04.466 INFO Fetch successful Nov 6 00:28:04.466773 coreos-metadata[1831]: Nov 06 00:28:04.466 INFO 
Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 6 00:28:04.469820 coreos-metadata[1831]: Nov 06 00:28:04.469 INFO Fetch successful Nov 6 00:28:04.469820 coreos-metadata[1831]: Nov 06 00:28:04.469 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 6 00:28:04.471202 coreos-metadata[1831]: Nov 06 00:28:04.470 INFO Fetch successful Nov 6 00:28:04.520204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:28:04.520958 containerd[1886]: time="2025-11-06T00:28:04.520915778Z" level=info msg="Start subscribing containerd event" Nov 6 00:28:04.521068 containerd[1886]: time="2025-11-06T00:28:04.520990017Z" level=info msg="Start recovering state" Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.521578482Z" level=info msg="Start event monitor" Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.521630219Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.521642280Z" level=info msg="Start streaming server" Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.522210864Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.522230451Z" level=info msg="runtime interface starting up..." Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.522240463Z" level=info msg="starting plugins..." Nov 6 00:28:04.522309 containerd[1886]: time="2025-11-06T00:28:04.522280798Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:28:04.527734 containerd[1886]: time="2025-11-06T00:28:04.524598599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:28:04.527734 containerd[1886]: time="2025-11-06T00:28:04.525007926Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 6 00:28:04.527734 containerd[1886]: time="2025-11-06T00:28:04.525191634Z" level=info msg="containerd successfully booted in 0.487639s" Nov 6 00:28:04.525902 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:28:04.545606 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:28:04.548365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:28:04.553112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:28:04.586995 systemd-logind[1847]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:28:04.599752 systemd-logind[1847]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:28:04.610421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:28:04.610815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:28:04.618408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:28:04.643246 systemd-networkd[1803]: eth0: Gained IPv6LL Nov 6 00:28:04.649282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:28:04.650124 systemd-logind[1847]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 6 00:28:04.652708 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:28:04.657576 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Nov 6 00:28:04.659906 ntpd[2074]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting Nov 6 00:28:04.660827 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 21:31:10 UTC 2025 (1): Starting Nov 6 00:28:04.661086 ntpd[2074]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: ---------------------------------------------------- Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: ntp-4 is maintained by Network Time Foundation, Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: corporation. Support and training for ntp-4 are Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: available at https://www.nwtime.org/support Nov 6 00:28:04.661812 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: ---------------------------------------------------- Nov 6 00:28:04.661292 ntpd[2074]: ---------------------------------------------------- Nov 6 00:28:04.661302 ntpd[2074]: ntp-4 is maintained by Network Time Foundation, Nov 6 00:28:04.661312 ntpd[2074]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 6 00:28:04.661322 ntpd[2074]: corporation. Support and training for ntp-4 are Nov 6 00:28:04.661330 ntpd[2074]: available at https://www.nwtime.org/support Nov 6 00:28:04.661340 ntpd[2074]: ---------------------------------------------------- Nov 6 00:28:04.664193 ntpd[2074]: proto: precision = 0.071 usec (-24) Nov 6 00:28:04.665910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 6 00:28:04.668049 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: proto: precision = 0.071 usec (-24) Nov 6 00:28:04.668049 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: basedate set to 2025-10-24 Nov 6 00:28:04.668049 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: gps base set to 2025-10-26 (week 2390) Nov 6 00:28:04.668049 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 00:28:04.668049 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 00:28:04.664471 ntpd[2074]: basedate set to 2025-10-24 Nov 6 00:28:04.664485 ntpd[2074]: gps base set to 2025-10-26 (week 2390) Nov 6 00:28:04.664581 ntpd[2074]: Listen and drop on 0 v6wildcard [::]:123 Nov 6 00:28:04.664608 ntpd[2074]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 6 00:28:04.668547 ntpd[2074]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 00:28:04.669019 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listen normally on 2 lo 127.0.0.1:123 Nov 6 00:28:04.669019 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listen normally on 3 eth0 172.31.28.191:123 Nov 6 00:28:04.669019 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listen normally on 4 lo [::1]:123 Nov 6 00:28:04.669019 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listen normally on 5 eth0 [fe80::484:17ff:fe03:5d11%2]:123 Nov 6 00:28:04.669019 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: Listening on routing socket on fd #22 for interface updates Nov 6 00:28:04.668585 ntpd[2074]: Listen normally on 3 eth0 172.31.28.191:123 Nov 6 00:28:04.668619 ntpd[2074]: Listen normally on 4 lo [::1]:123 Nov 6 00:28:04.668650 ntpd[2074]: Listen normally on 5 eth0 [fe80::484:17ff:fe03:5d11%2]:123 Nov 6 00:28:04.668679 ntpd[2074]: Listening on routing socket on fd #22 for interface updates Nov 6 00:28:04.671223 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Nov 6 00:28:04.680578 ntpd[2074]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:28:04.681541 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:28:04.681541 ntpd[2074]: 6 Nov 00:28:04 ntpd[2074]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:28:04.680623 ntpd[2074]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 6 00:28:04.779300 polkitd[2031]: Started polkitd version 126 Nov 6 00:28:04.808858 polkitd[2031]: Loading rules from directory /etc/polkit-1/rules.d Nov 6 00:28:04.809723 polkitd[2031]: Loading rules from directory /run/polkit-1/rules.d Nov 6 00:28:04.810887 polkitd[2031]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 00:28:04.812121 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:28:04.812446 polkitd[2031]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 6 00:28:04.812494 polkitd[2031]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 00:28:04.812546 polkitd[2031]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 6 00:28:04.819203 polkitd[2031]: Finished loading, compiling and executing 2 rules Nov 6 00:28:04.819964 systemd[1]: Started polkit.service - Authorization Manager. Nov 6 00:28:04.829148 dbus-daemon[1832]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 6 00:28:04.830618 polkitd[2031]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 6 00:28:04.877116 systemd-resolved[1751]: System hostname changed to 'ip-172-31-28-191'. 
Nov 6 00:28:04.877274 systemd-hostnamed[1920]: Hostname set to (transient) Nov 6 00:28:04.879801 amazon-ssm-agent[2105]: Initializing new seelog logger Nov 6 00:28:04.880128 amazon-ssm-agent[2105]: New Seelog Logger Creation Complete Nov 6 00:28:04.880268 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.880328 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.880979 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 processing appconfig overrides Nov 6 00:28:04.881635 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.883812 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.883812 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 processing appconfig overrides Nov 6 00:28:04.883812 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.883812 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.883812 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 processing appconfig overrides Nov 6 00:28:04.884086 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.8815 INFO Proxy environment variables: Nov 6 00:28:04.896395 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.896395 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:04.896395 amazon-ssm-agent[2105]: 2025/11/06 00:28:04 processing appconfig overrides Nov 6 00:28:04.915682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 6 00:28:04.968385 tar[1857]: linux-amd64/README.md Nov 6 00:28:04.984115 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.8815 INFO https_proxy: Nov 6 00:28:04.993133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:28:05.082415 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.8815 INFO http_proxy: Nov 6 00:28:05.165597 amazon-ssm-agent[2105]: 2025/11/06 00:28:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:05.165597 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 6 00:28:05.165926 amazon-ssm-agent[2105]: 2025/11/06 00:28:05 processing appconfig overrides Nov 6 00:28:05.180802 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.8816 INFO no_proxy: Nov 6 00:28:05.205243 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.8829 INFO Checking if agent identity type OnPrem can be assumed Nov 6 00:28:05.205243 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.8831 INFO Checking if agent identity type EC2 can be assumed Nov 6 00:28:05.205243 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9767 INFO Agent will take identity from EC2 Nov 6 00:28:05.205243 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9783 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 6 00:28:05.205243 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9784 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9784 INFO [amazon-ssm-agent] Starting Core Agent Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9784 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9784 INFO [Registrar] Starting registrar module Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9798 INFO [EC2Identity] Checking disk for registration info Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9799 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:04.9799 INFO [EC2Identity] Generating registration keypair Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.1170 INFO [EC2Identity] Checking write access before registering Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.1173 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.1652 INFO [EC2Identity] EC2 registration was successful. Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.1652 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.1653 INFO [CredentialRefresher] credentialRefresher has started Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.1653 INFO [CredentialRefresher] Starting credentials refresher loop Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.2049 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 6 00:28:05.205589 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.2051 INFO [CredentialRefresher] Credentials ready Nov 6 00:28:05.248193 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:28:05.249958 systemd[1]: Started sshd@0-172.31.28.191:22-147.75.109.163:48560.service - OpenSSH per-connection server daemon (147.75.109.163:48560). 
Nov 6 00:28:05.278862 amazon-ssm-agent[2105]: 2025-11-06 00:28:05.2053 INFO [CredentialRefresher] Next credential rotation will be in 29.999993799066665 minutes Nov 6 00:28:05.471901 sshd[2143]: Accepted publickey for core from 147.75.109.163 port 48560 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:28:05.473429 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:05.483078 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:28:05.486941 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:28:05.500978 systemd-logind[1847]: New session 1 of user core. Nov 6 00:28:05.518460 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:28:05.525067 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:28:05.544114 (systemd)[2148]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:28:05.548554 systemd-logind[1847]: New session c1 of user core. Nov 6 00:28:05.738494 systemd[2148]: Queued start job for default target default.target. Nov 6 00:28:05.749268 systemd[2148]: Created slice app.slice - User Application Slice. Nov 6 00:28:05.749314 systemd[2148]: Reached target paths.target - Paths. Nov 6 00:28:05.749473 systemd[2148]: Reached target timers.target - Timers. Nov 6 00:28:05.750879 systemd[2148]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:28:05.768977 systemd[2148]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:28:05.769837 systemd[2148]: Reached target sockets.target - Sockets. Nov 6 00:28:05.769889 systemd[2148]: Reached target basic.target - Basic System. Nov 6 00:28:05.769926 systemd[2148]: Reached target default.target - Main User Target. Nov 6 00:28:05.769958 systemd[2148]: Startup finished in 211ms. 
Nov 6 00:28:05.770421 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:28:05.780030 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:28:05.926799 systemd[1]: Started sshd@1-172.31.28.191:22-147.75.109.163:48562.service - OpenSSH per-connection server daemon (147.75.109.163:48562). Nov 6 00:28:06.108196 sshd[2159]: Accepted publickey for core from 147.75.109.163 port 48562 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:28:06.109693 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:06.115676 systemd-logind[1847]: New session 2 of user core. Nov 6 00:28:06.118993 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:28:06.228262 amazon-ssm-agent[2105]: 2025-11-06 00:28:06.2281 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 6 00:28:06.247695 sshd[2162]: Connection closed by 147.75.109.163 port 48562 Nov 6 00:28:06.248254 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:06.252198 systemd[1]: sshd@1-172.31.28.191:22-147.75.109.163:48562.service: Deactivated successfully. Nov 6 00:28:06.253990 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:28:06.256763 systemd-logind[1847]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:28:06.257962 systemd-logind[1847]: Removed session 2. Nov 6 00:28:06.279598 systemd[1]: Started sshd@2-172.31.28.191:22-147.75.109.163:48578.service - OpenSSH per-connection server daemon (147.75.109.163:48578). 
Nov 6 00:28:06.329042 amazon-ssm-agent[2105]: 2025-11-06 00:28:06.2301 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2166) started Nov 6 00:28:06.430240 amazon-ssm-agent[2105]: 2025-11-06 00:28:06.2301 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 6 00:28:06.472932 sshd[2171]: Accepted publickey for core from 147.75.109.163 port 48578 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:28:06.474277 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:28:06.480406 systemd-logind[1847]: New session 3 of user core. Nov 6 00:28:06.487044 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:28:06.611822 sshd[2185]: Connection closed by 147.75.109.163 port 48578 Nov 6 00:28:06.612304 sshd-session[2171]: pam_unix(sshd:session): session closed for user core Nov 6 00:28:06.616859 systemd[1]: sshd@2-172.31.28.191:22-147.75.109.163:48578.service: Deactivated successfully. Nov 6 00:28:06.620251 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:28:06.621400 systemd-logind[1847]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:28:06.623453 systemd-logind[1847]: Removed session 3. Nov 6 00:28:06.855847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:28:06.857551 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:28:06.859914 systemd[1]: Startup finished in 2.549s (kernel) + 6.421s (initrd) + 8.859s (userspace) = 17.831s. 
Nov 6 00:28:06.868835 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:28:07.940488 kubelet[2195]: E1106 00:28:07.940408 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:28:07.943137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:28:07.943285 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:28:07.943760 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 268.2M memory peak.
Nov 6 00:28:13.441800 systemd-resolved[1751]: Clock change detected. Flushing caches.
Nov 6 00:28:18.428660 systemd[1]: Started sshd@3-172.31.28.191:22-147.75.109.163:49076.service - OpenSSH per-connection server daemon (147.75.109.163:49076).
Nov 6 00:28:18.601779 sshd[2208]: Accepted publickey for core from 147.75.109.163 port 49076 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8
Nov 6 00:28:18.603419 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:18.608849 systemd-logind[1847]: New session 4 of user core.
Nov 6 00:28:18.614776 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 6 00:28:18.735389 sshd[2211]: Connection closed by 147.75.109.163 port 49076
Nov 6 00:28:18.735974 sshd-session[2208]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:18.740216 systemd[1]: sshd@3-172.31.28.191:22-147.75.109.163:49076.service: Deactivated successfully.
Nov 6 00:28:18.741890 systemd[1]: session-4.scope: Deactivated successfully.
Nov 6 00:28:18.742713 systemd-logind[1847]: Session 4 logged out. Waiting for processes to exit.
Nov 6 00:28:18.744164 systemd-logind[1847]: Removed session 4.
Nov 6 00:28:18.768731 systemd[1]: Started sshd@4-172.31.28.191:22-147.75.109.163:49088.service - OpenSSH per-connection server daemon (147.75.109.163:49088).
Nov 6 00:28:18.945075 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 49088 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8
Nov 6 00:28:18.946439 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:18.952744 systemd-logind[1847]: New session 5 of user core.
Nov 6 00:28:18.961915 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 6 00:28:19.076684 sshd[2220]: Connection closed by 147.75.109.163 port 49088
Nov 6 00:28:19.077514 sshd-session[2217]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:19.081742 systemd[1]: sshd@4-172.31.28.191:22-147.75.109.163:49088.service: Deactivated successfully.
Nov 6 00:28:19.083535 systemd[1]: session-5.scope: Deactivated successfully.
Nov 6 00:28:19.084312 systemd-logind[1847]: Session 5 logged out. Waiting for processes to exit.
Nov 6 00:28:19.085756 systemd-logind[1847]: Removed session 5.
Nov 6 00:28:19.114253 systemd[1]: Started sshd@5-172.31.28.191:22-147.75.109.163:49092.service - OpenSSH per-connection server daemon (147.75.109.163:49092).
Nov 6 00:28:19.292320 sshd[2226]: Accepted publickey for core from 147.75.109.163 port 49092 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8
Nov 6 00:28:19.293642 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:19.299922 systemd-logind[1847]: New session 6 of user core.
Nov 6 00:28:19.306853 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 6 00:28:19.430274 sshd[2229]: Connection closed by 147.75.109.163 port 49092
Nov 6 00:28:19.431306 sshd-session[2226]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:19.434977 systemd[1]: sshd@5-172.31.28.191:22-147.75.109.163:49092.service: Deactivated successfully.
Nov 6 00:28:19.436937 systemd[1]: session-6.scope: Deactivated successfully.
Nov 6 00:28:19.437921 systemd-logind[1847]: Session 6 logged out. Waiting for processes to exit.
Nov 6 00:28:19.439638 systemd-logind[1847]: Removed session 6.
Nov 6 00:28:19.463797 systemd[1]: Started sshd@6-172.31.28.191:22-147.75.109.163:49106.service - OpenSSH per-connection server daemon (147.75.109.163:49106).
Nov 6 00:28:19.635694 sshd[2235]: Accepted publickey for core from 147.75.109.163 port 49106 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8
Nov 6 00:28:19.636961 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:19.642523 systemd-logind[1847]: New session 7 of user core.
Nov 6 00:28:19.649870 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 6 00:28:19.728044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 6 00:28:19.730005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:19.767783 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 6 00:28:19.768057 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 6 00:28:19.781920 sudo[2242]: pam_unix(sudo:session): session closed for user root
Nov 6 00:28:19.805465 sshd[2238]: Connection closed by 147.75.109.163 port 49106
Nov 6 00:28:19.806927 sshd-session[2235]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:19.812330 systemd[1]: sshd@6-172.31.28.191:22-147.75.109.163:49106.service: Deactivated successfully.
Nov 6 00:28:19.816371 systemd[1]: session-7.scope: Deactivated successfully.
Nov 6 00:28:19.819468 systemd-logind[1847]: Session 7 logged out. Waiting for processes to exit.
Nov 6 00:28:19.821088 systemd-logind[1847]: Removed session 7.
Nov 6 00:28:19.838097 systemd[1]: Started sshd@7-172.31.28.191:22-147.75.109.163:49120.service - OpenSSH per-connection server daemon (147.75.109.163:49120).
Nov 6 00:28:19.962087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:19.971248 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:28:19.999703 sshd[2248]: Accepted publickey for core from 147.75.109.163 port 49120 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8
Nov 6 00:28:20.001356 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:20.007337 systemd-logind[1847]: New session 8 of user core.
Nov 6 00:28:20.013062 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 6 00:28:20.029319 kubelet[2256]: E1106 00:28:20.029281 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:28:20.033958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:28:20.034158 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:28:20.034692 systemd[1]: kubelet.service: Consumed 183ms CPU time, 111M memory peak.
Nov 6 00:28:20.110791 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 6 00:28:20.111252 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 6 00:28:20.116359 sudo[2265]: pam_unix(sudo:session): session closed for user root
Nov 6 00:28:20.122350 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 6 00:28:20.122735 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 6 00:28:20.133628 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 6 00:28:20.182304 augenrules[2287]: No rules
Nov 6 00:28:20.186222 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 6 00:28:20.187144 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 6 00:28:20.189275 sudo[2264]: pam_unix(sudo:session): session closed for user root
Nov 6 00:28:20.211947 sshd[2262]: Connection closed by 147.75.109.163 port 49120
Nov 6 00:28:20.212493 sshd-session[2248]: pam_unix(sshd:session): session closed for user core
Nov 6 00:28:20.215907 systemd[1]: sshd@7-172.31.28.191:22-147.75.109.163:49120.service: Deactivated successfully.
Nov 6 00:28:20.217777 systemd[1]: session-8.scope: Deactivated successfully.
Nov 6 00:28:20.218989 systemd-logind[1847]: Session 8 logged out. Waiting for processes to exit.
Nov 6 00:28:20.220849 systemd-logind[1847]: Removed session 8.
Nov 6 00:28:20.245153 systemd[1]: Started sshd@8-172.31.28.191:22-147.75.109.163:57638.service - OpenSSH per-connection server daemon (147.75.109.163:57638).
Nov 6 00:28:20.409640 sshd[2296]: Accepted publickey for core from 147.75.109.163 port 57638 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8
Nov 6 00:28:20.410984 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 00:28:20.417560 systemd-logind[1847]: New session 9 of user core.
Nov 6 00:28:20.424538 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 6 00:28:20.518686 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 6 00:28:20.519066 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 6 00:28:21.112276 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 6 00:28:21.123240 (dockerd)[2318]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 6 00:28:21.534484 dockerd[2318]: time="2025-11-06T00:28:21.534421076Z" level=info msg="Starting up"
Nov 6 00:28:21.535781 dockerd[2318]: time="2025-11-06T00:28:21.535751543Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 6 00:28:21.547433 dockerd[2318]: time="2025-11-06T00:28:21.547361759Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 6 00:28:21.610528 dockerd[2318]: time="2025-11-06T00:28:21.610484981Z" level=info msg="Loading containers: start."
Nov 6 00:28:21.620727 kernel: Initializing XFRM netlink socket
Nov 6 00:28:21.896543 (udev-worker)[2339]: Network interface NamePolicy= disabled on kernel command line.
Nov 6 00:28:21.944212 systemd-networkd[1803]: docker0: Link UP
Nov 6 00:28:21.948533 dockerd[2318]: time="2025-11-06T00:28:21.948474791Z" level=info msg="Loading containers: done."
Nov 6 00:28:21.963936 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4259242590-merged.mount: Deactivated successfully.
Nov 6 00:28:21.967846 dockerd[2318]: time="2025-11-06T00:28:21.967800203Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 6 00:28:21.967987 dockerd[2318]: time="2025-11-06T00:28:21.967892796Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 6 00:28:21.967987 dockerd[2318]: time="2025-11-06T00:28:21.967974418Z" level=info msg="Initializing buildkit"
Nov 6 00:28:21.996651 dockerd[2318]: time="2025-11-06T00:28:21.996603306Z" level=info msg="Completed buildkit initialization"
Nov 6 00:28:22.001619 dockerd[2318]: time="2025-11-06T00:28:22.001206115Z" level=info msg="Daemon has completed initialization"
Nov 6 00:28:22.001619 dockerd[2318]: time="2025-11-06T00:28:22.001286727Z" level=info msg="API listen on /run/docker.sock"
Nov 6 00:28:22.001920 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 6 00:28:23.254499 containerd[1886]: time="2025-11-06T00:28:23.254459633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 6 00:28:23.780051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14060570.mount: Deactivated successfully.
Nov 6 00:28:25.878911 containerd[1886]: time="2025-11-06T00:28:25.878839095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:25.880088 containerd[1886]: time="2025-11-06T00:28:25.879898344Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 6 00:28:25.881041 containerd[1886]: time="2025-11-06T00:28:25.881011101Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:25.884301 containerd[1886]: time="2025-11-06T00:28:25.884266771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:25.885297 containerd[1886]: time="2025-11-06T00:28:25.885040892Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.630543787s"
Nov 6 00:28:25.885297 containerd[1886]: time="2025-11-06T00:28:25.885078445Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 6 00:28:25.885737 containerd[1886]: time="2025-11-06T00:28:25.885704769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 6 00:28:28.034473 containerd[1886]: time="2025-11-06T00:28:28.034417353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:28.035505 containerd[1886]: time="2025-11-06T00:28:28.035466242Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 6 00:28:28.036610 containerd[1886]: time="2025-11-06T00:28:28.036339877Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:28.039410 containerd[1886]: time="2025-11-06T00:28:28.039361891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:28.040796 containerd[1886]: time="2025-11-06T00:28:28.040197163Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 2.154464038s"
Nov 6 00:28:28.040796 containerd[1886]: time="2025-11-06T00:28:28.040230633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 6 00:28:28.041092 containerd[1886]: time="2025-11-06T00:28:28.041046828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 6 00:28:29.732095 containerd[1886]: time="2025-11-06T00:28:29.732043130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:29.733093 containerd[1886]: time="2025-11-06T00:28:29.732851565Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 6 00:28:29.734163 containerd[1886]: time="2025-11-06T00:28:29.734130198Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:29.736704 containerd[1886]: time="2025-11-06T00:28:29.736673323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:29.737420 containerd[1886]: time="2025-11-06T00:28:29.737384636Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.696284238s"
Nov 6 00:28:29.737420 containerd[1886]: time="2025-11-06T00:28:29.737421977Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 6 00:28:29.737930 containerd[1886]: time="2025-11-06T00:28:29.737896155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 6 00:28:30.228002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 6 00:28:30.229960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:30.562772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:30.575128 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:28:30.642686 kubelet[2606]: E1106 00:28:30.642599 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:28:30.646172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:28:30.646366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:28:30.647212 systemd[1]: kubelet.service: Consumed 212ms CPU time, 108.4M memory peak.
Nov 6 00:28:30.871361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918916218.mount: Deactivated successfully.
Nov 6 00:28:31.501295 containerd[1886]: time="2025-11-06T00:28:31.501240756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:31.502249 containerd[1886]: time="2025-11-06T00:28:31.502102479Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 6 00:28:31.503651 containerd[1886]: time="2025-11-06T00:28:31.503614436Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:31.506013 containerd[1886]: time="2025-11-06T00:28:31.505977033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:31.506839 containerd[1886]: time="2025-11-06T00:28:31.506632783Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.768702304s"
Nov 6 00:28:31.506839 containerd[1886]: time="2025-11-06T00:28:31.506664988Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 6 00:28:31.507325 containerd[1886]: time="2025-11-06T00:28:31.507246689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 6 00:28:31.977500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706995214.mount: Deactivated successfully.
Nov 6 00:28:33.446011 containerd[1886]: time="2025-11-06T00:28:33.445955513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:33.447198 containerd[1886]: time="2025-11-06T00:28:33.447138559Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 6 00:28:33.449677 containerd[1886]: time="2025-11-06T00:28:33.449000085Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:33.452784 containerd[1886]: time="2025-11-06T00:28:33.452744045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:33.453503 containerd[1886]: time="2025-11-06T00:28:33.453470043Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.946193449s"
Nov 6 00:28:33.453503 containerd[1886]: time="2025-11-06T00:28:33.453505356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 6 00:28:33.454513 containerd[1886]: time="2025-11-06T00:28:33.454483224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 6 00:28:33.936145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841793040.mount: Deactivated successfully.
Nov 6 00:28:33.942220 containerd[1886]: time="2025-11-06T00:28:33.942150596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:28:33.943282 containerd[1886]: time="2025-11-06T00:28:33.943063642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 6 00:28:33.944254 containerd[1886]: time="2025-11-06T00:28:33.944226204Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:28:33.946303 containerd[1886]: time="2025-11-06T00:28:33.946261090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:28:33.947321 containerd[1886]: time="2025-11-06T00:28:33.946798097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.284721ms"
Nov 6 00:28:33.947321 containerd[1886]: time="2025-11-06T00:28:33.946830563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 6 00:28:33.947610 containerd[1886]: time="2025-11-06T00:28:33.947539942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 6 00:28:34.400838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256692980.mount: Deactivated successfully.
Nov 6 00:28:36.668215 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 6 00:28:37.068812 containerd[1886]: time="2025-11-06T00:28:37.068735131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:37.069939 containerd[1886]: time="2025-11-06T00:28:37.069828781Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 6 00:28:37.071175 containerd[1886]: time="2025-11-06T00:28:37.071136607Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:37.074648 containerd[1886]: time="2025-11-06T00:28:37.074148442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:28:37.075773 containerd[1886]: time="2025-11-06T00:28:37.075730201Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.128151725s"
Nov 6 00:28:37.075887 containerd[1886]: time="2025-11-06T00:28:37.075778888Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 6 00:28:40.727926 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 6 00:28:40.731838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:41.112751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:41.123052 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:28:41.191718 kubelet[2762]: E1106 00:28:41.191671 2762 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:28:41.197428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:28:41.197809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:28:41.198324 systemd[1]: kubelet.service: Consumed 212ms CPU time, 109.7M memory peak.
Nov 6 00:28:41.609061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:41.609530 systemd[1]: kubelet.service: Consumed 212ms CPU time, 109.7M memory peak.
Nov 6 00:28:41.612325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:41.647358 systemd[1]: Reload requested from client PID 2776 ('systemctl') (unit session-9.scope)...
Nov 6 00:28:41.647378 systemd[1]: Reloading...
Nov 6 00:28:41.750605 zram_generator::config[2816]: No configuration found.
Nov 6 00:28:42.052795 systemd[1]: Reloading finished in 404 ms.
Nov 6 00:28:42.097649 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 6 00:28:42.097755 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 6 00:28:42.098198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:42.098264 systemd[1]: kubelet.service: Consumed 125ms CPU time, 91.5M memory peak.
Nov 6 00:28:42.100264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:42.601158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:42.612147 (kubelet)[2880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:28:42.692519 kubelet[2880]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:28:42.692519 kubelet[2880]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:28:42.692519 kubelet[2880]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:28:42.710613 kubelet[2880]: I1106 00:28:42.710286 2880 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:28:42.936410 kubelet[2880]: I1106 00:28:42.936287 2880 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 6 00:28:42.936410 kubelet[2880]: I1106 00:28:42.936318 2880 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:28:42.936537 kubelet[2880]: I1106 00:28:42.936533 2880 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 6 00:28:42.980105 kubelet[2880]: I1106 00:28:42.980059 2880 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:28:42.984638 kubelet[2880]: E1106 00:28:42.984568 2880 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 6 00:28:43.019269 kubelet[2880]: I1106 00:28:43.019216 2880 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:28:43.033419 kubelet[2880]: I1106 00:28:43.033366 2880 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 00:28:43.037922 kubelet[2880]: I1106 00:28:43.037840 2880 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:28:43.042184 kubelet[2880]: I1106 00:28:43.037905 2880 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:28:43.043754 kubelet[2880]: I1106 00:28:43.043711 2880 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 00:28:43.043754 kubelet[2880]: I1106 00:28:43.043753 2880 container_manager_linux.go:303] "Creating device plugin manager"
Nov 6 00:28:43.044999 kubelet[2880]: I1106 00:28:43.044966 2880 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:28:43.049163 kubelet[2880]: I1106 00:28:43.048886 2880 kubelet.go:480] "Attempting to sync node with API server"
Nov 6 00:28:43.049163 kubelet[2880]: I1106 00:28:43.048931 2880 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:28:43.051855 kubelet[2880]: I1106 00:28:43.051813 2880 kubelet.go:386] "Adding apiserver pod source"
Nov 6 00:28:43.054245 kubelet[2880]: I1106 00:28:43.053969 2880 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:28:43.057213 kubelet[2880]: E1106 00:28:43.057152 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-191&limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 6 00:28:43.066497 kubelet[2880]: E1106 00:28:43.065505 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 6 00:28:43.066497 kubelet[2880]: I1106 00:28:43.065960 2880 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:28:43.066497 kubelet[2880]: I1106 00:28:43.066415 2880 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 6 00:28:43.067486 kubelet[2880]: W1106 00:28:43.067368 2880 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 6 00:28:43.073038 kubelet[2880]: I1106 00:28:43.072999 2880 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 00:28:43.073160 kubelet[2880]: I1106 00:28:43.073068 2880 server.go:1289] "Started kubelet"
Nov 6 00:28:43.077465 kubelet[2880]: I1106 00:28:43.075642 2880 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:28:43.077465 kubelet[2880]: I1106 00:28:43.076748 2880 server.go:317] "Adding debug handlers to kubelet server"
Nov 6 00:28:43.078606 kubelet[2880]: I1106 00:28:43.077966 2880 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:28:43.080066 kubelet[2880]: I1106 00:28:43.079788 2880 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:28:43.083518 kubelet[2880]: E1106 00:28:43.079946 2880 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.191:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-191.1875436353d9e6e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-191,UID:ip-172-31-28-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-191,},FirstTimestamp:2025-11-06 00:28:43.073029861 +0000 UTC m=+0.456102638,LastTimestamp:2025-11-06 00:28:43.073029861 +0000 UTC m=+0.456102638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-191,}"
Nov 6 00:28:43.086106 kubelet[2880]: I1106 00:28:43.085111 2880 fs_resource_analyzer.go:67]
"Starting FS ResourceAnalyzer" Nov 6 00:28:43.089666 kubelet[2880]: I1106 00:28:43.085239 2880 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:28:43.089666 kubelet[2880]: I1106 00:28:43.089136 2880 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:28:43.089666 kubelet[2880]: I1106 00:28:43.089574 2880 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:28:43.089850 kubelet[2880]: I1106 00:28:43.089722 2880 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:28:43.090338 kubelet[2880]: E1106 00:28:43.090301 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:28:43.095139 kubelet[2880]: E1106 00:28:43.094870 2880 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-191\" not found" Nov 6 00:28:43.099343 kubelet[2880]: E1106 00:28:43.099304 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-191?timeout=10s\": dial tcp 172.31.28.191:6443: connect: connection refused" interval="200ms" Nov 6 00:28:43.099670 kubelet[2880]: E1106 00:28:43.099647 2880 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:28:43.101223 kubelet[2880]: I1106 00:28:43.101202 2880 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:28:43.101341 kubelet[2880]: I1106 00:28:43.101332 2880 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:28:43.101504 kubelet[2880]: I1106 00:28:43.101485 2880 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:28:43.122469 kubelet[2880]: I1106 00:28:43.122439 2880 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:28:43.122469 kubelet[2880]: I1106 00:28:43.122456 2880 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:28:43.122469 kubelet[2880]: I1106 00:28:43.122476 2880 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:28:43.125294 kubelet[2880]: I1106 00:28:43.125162 2880 policy_none.go:49] "None policy: Start" Nov 6 00:28:43.125294 kubelet[2880]: I1106 00:28:43.125188 2880 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:28:43.125294 kubelet[2880]: I1106 00:28:43.125202 2880 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:28:43.129316 kubelet[2880]: I1106 00:28:43.129266 2880 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:28:43.133041 kubelet[2880]: I1106 00:28:43.132698 2880 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:28:43.133041 kubelet[2880]: I1106 00:28:43.132727 2880 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:28:43.133041 kubelet[2880]: I1106 00:28:43.132753 2880 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:28:43.133041 kubelet[2880]: I1106 00:28:43.132763 2880 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:28:43.133041 kubelet[2880]: E1106 00:28:43.132809 2880 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:28:43.139625 kubelet[2880]: E1106 00:28:43.139551 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:28:43.146335 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:28:43.162319 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:28:43.166297 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:28:43.175971 kubelet[2880]: E1106 00:28:43.175818 2880 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:28:43.176223 kubelet[2880]: I1106 00:28:43.176200 2880 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:28:43.176684 kubelet[2880]: I1106 00:28:43.176362 2880 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:28:43.176684 kubelet[2880]: I1106 00:28:43.176626 2880 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:28:43.178513 kubelet[2880]: E1106 00:28:43.178494 2880 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:28:43.178678 kubelet[2880]: E1106 00:28:43.178663 2880 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-191\" not found" Nov 6 00:28:43.260111 systemd[1]: Created slice kubepods-burstable-pod110b3ae2b1f257ba0208954aa680c557.slice - libcontainer container kubepods-burstable-pod110b3ae2b1f257ba0208954aa680c557.slice. Nov 6 00:28:43.271726 kubelet[2880]: E1106 00:28:43.271683 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191" Nov 6 00:28:43.274430 systemd[1]: Created slice kubepods-burstable-pod211ab5501e823debf512121df5e3aeed.slice - libcontainer container kubepods-burstable-pod211ab5501e823debf512121df5e3aeed.slice. Nov 6 00:28:43.278241 kubelet[2880]: I1106 00:28:43.278216 2880 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-191" Nov 6 00:28:43.278658 kubelet[2880]: E1106 00:28:43.278571 2880 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.191:6443/api/v1/nodes\": dial tcp 172.31.28.191:6443: connect: connection refused" node="ip-172-31-28-191" Nov 6 00:28:43.280085 kubelet[2880]: E1106 00:28:43.280061 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191" Nov 6 00:28:43.283375 systemd[1]: Created slice kubepods-burstable-podcf123470edba01ba5d0ff5224461aa10.slice - libcontainer container kubepods-burstable-podcf123470edba01ba5d0ff5224461aa10.slice. 
Nov 6 00:28:43.285223 kubelet[2880]: E1106 00:28:43.285196 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191" Nov 6 00:28:43.300479 kubelet[2880]: E1106 00:28:43.300439 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-191?timeout=10s\": dial tcp 172.31.28.191:6443: connect: connection refused" interval="400ms" Nov 6 00:28:43.390946 kubelet[2880]: I1106 00:28:43.390720 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/110b3ae2b1f257ba0208954aa680c557-ca-certs\") pod \"kube-apiserver-ip-172-31-28-191\" (UID: \"110b3ae2b1f257ba0208954aa680c557\") " pod="kube-system/kube-apiserver-ip-172-31-28-191" Nov 6 00:28:43.390946 kubelet[2880]: I1106 00:28:43.390926 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/110b3ae2b1f257ba0208954aa680c557-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-191\" (UID: \"110b3ae2b1f257ba0208954aa680c557\") " pod="kube-system/kube-apiserver-ip-172-31-28-191" Nov 6 00:28:43.390946 kubelet[2880]: I1106 00:28:43.390943 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191" Nov 6 00:28:43.390946 kubelet[2880]: I1106 00:28:43.390959 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191" Nov 6 00:28:43.391180 kubelet[2880]: I1106 00:28:43.390976 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191" Nov 6 00:28:43.391180 kubelet[2880]: I1106 00:28:43.390995 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/110b3ae2b1f257ba0208954aa680c557-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-191\" (UID: \"110b3ae2b1f257ba0208954aa680c557\") " pod="kube-system/kube-apiserver-ip-172-31-28-191" Nov 6 00:28:43.391180 kubelet[2880]: I1106 00:28:43.391010 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191" Nov 6 00:28:43.391180 kubelet[2880]: I1106 00:28:43.391025 2880 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191" Nov 6 00:28:43.391180 kubelet[2880]: I1106 00:28:43.391041 2880 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf123470edba01ba5d0ff5224461aa10-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-191\" (UID: \"cf123470edba01ba5d0ff5224461aa10\") " pod="kube-system/kube-scheduler-ip-172-31-28-191" Nov 6 00:28:43.481024 kubelet[2880]: I1106 00:28:43.480991 2880 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-191" Nov 6 00:28:43.481443 kubelet[2880]: E1106 00:28:43.481407 2880 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.191:6443/api/v1/nodes\": dial tcp 172.31.28.191:6443: connect: connection refused" node="ip-172-31-28-191" Nov 6 00:28:43.573781 containerd[1886]: time="2025-11-06T00:28:43.573598403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-191,Uid:110b3ae2b1f257ba0208954aa680c557,Namespace:kube-system,Attempt:0,}" Nov 6 00:28:43.587920 containerd[1886]: time="2025-11-06T00:28:43.587859416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-191,Uid:211ab5501e823debf512121df5e3aeed,Namespace:kube-system,Attempt:0,}" Nov 6 00:28:43.588886 containerd[1886]: time="2025-11-06T00:28:43.588103224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-191,Uid:cf123470edba01ba5d0ff5224461aa10,Namespace:kube-system,Attempt:0,}" Nov 6 00:28:43.701156 kubelet[2880]: E1106 00:28:43.701106 2880 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-191?timeout=10s\": dial tcp 172.31.28.191:6443: connect: connection refused" interval="800ms" Nov 6 00:28:43.711017 containerd[1886]: time="2025-11-06T00:28:43.710934254Z" level=info msg="connecting to shim c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9" 
address="unix:///run/containerd/s/5382e775e8df1c6775a2262aaac038b4cfbed96865fc2be8e5d79861d3d7034b" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:28:43.720925 containerd[1886]: time="2025-11-06T00:28:43.720874853Z" level=info msg="connecting to shim 50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa" address="unix:///run/containerd/s/15b24b360a537c9902625d97a50858fb60855f59640f745a3fa22854fbb5e24c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:28:43.746605 containerd[1886]: time="2025-11-06T00:28:43.734932226Z" level=info msg="connecting to shim d7d6b66c266234e29c9f4f5923ea2ff102e6761c6dcc51e9acbbb09a394ff138" address="unix:///run/containerd/s/a15a854305132f18f668424808aa4bd0214ff7da9df6b4d8bfaf491a053b66d3" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:28:43.863992 systemd[1]: Started cri-containerd-50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa.scope - libcontainer container 50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa. Nov 6 00:28:43.870156 systemd[1]: Started cri-containerd-c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9.scope - libcontainer container c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9. Nov 6 00:28:43.872302 systemd[1]: Started cri-containerd-d7d6b66c266234e29c9f4f5923ea2ff102e6761c6dcc51e9acbbb09a394ff138.scope - libcontainer container d7d6b66c266234e29c9f4f5923ea2ff102e6761c6dcc51e9acbbb09a394ff138. 
Nov 6 00:28:43.873672 kubelet[2880]: E1106 00:28:43.872632 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-191&limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:28:43.887863 kubelet[2880]: I1106 00:28:43.887831 2880 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-191" Nov 6 00:28:43.889210 kubelet[2880]: E1106 00:28:43.889164 2880 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.191:6443/api/v1/nodes\": dial tcp 172.31.28.191:6443: connect: connection refused" node="ip-172-31-28-191" Nov 6 00:28:43.969031 containerd[1886]: time="2025-11-06T00:28:43.968894943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-191,Uid:211ab5501e823debf512121df5e3aeed,Namespace:kube-system,Attempt:0,} returns sandbox id \"50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa\"" Nov 6 00:28:43.978871 containerd[1886]: time="2025-11-06T00:28:43.978827206Z" level=info msg="CreateContainer within sandbox \"50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:28:44.005588 containerd[1886]: time="2025-11-06T00:28:44.005518146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-191,Uid:110b3ae2b1f257ba0208954aa680c557,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7d6b66c266234e29c9f4f5923ea2ff102e6761c6dcc51e9acbbb09a394ff138\"" Nov 6 00:28:44.018870 containerd[1886]: time="2025-11-06T00:28:44.018811944Z" level=info msg="CreateContainer within sandbox \"d7d6b66c266234e29c9f4f5923ea2ff102e6761c6dcc51e9acbbb09a394ff138\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:28:44.025300 containerd[1886]: time="2025-11-06T00:28:44.025253894Z" level=info msg="Container 33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:28:44.028100 containerd[1886]: time="2025-11-06T00:28:44.027972818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-191,Uid:cf123470edba01ba5d0ff5224461aa10,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9\"" Nov 6 00:28:44.032395 containerd[1886]: time="2025-11-06T00:28:44.032326000Z" level=info msg="Container 45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:28:44.032560 containerd[1886]: time="2025-11-06T00:28:44.032537390Z" level=info msg="CreateContainer within sandbox \"c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:28:44.045802 containerd[1886]: time="2025-11-06T00:28:44.045163409Z" level=info msg="Container bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:28:44.045802 containerd[1886]: time="2025-11-06T00:28:44.045314703Z" level=info msg="CreateContainer within sandbox \"d7d6b66c266234e29c9f4f5923ea2ff102e6761c6dcc51e9acbbb09a394ff138\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70\"" Nov 6 00:28:44.046137 containerd[1886]: time="2025-11-06T00:28:44.046120219Z" level=info msg="StartContainer for \"45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70\"" Nov 6 00:28:44.051345 containerd[1886]: time="2025-11-06T00:28:44.051299546Z" level=info msg="connecting to shim 45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70" 
address="unix:///run/containerd/s/a15a854305132f18f668424808aa4bd0214ff7da9df6b4d8bfaf491a053b66d3" protocol=ttrpc version=3 Nov 6 00:28:44.053636 containerd[1886]: time="2025-11-06T00:28:44.053609989Z" level=info msg="CreateContainer within sandbox \"50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\"" Nov 6 00:28:44.054236 containerd[1886]: time="2025-11-06T00:28:44.054207784Z" level=info msg="CreateContainer within sandbox \"c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\"" Nov 6 00:28:44.054322 containerd[1886]: time="2025-11-06T00:28:44.054200600Z" level=info msg="StartContainer for \"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\"" Nov 6 00:28:44.055152 containerd[1886]: time="2025-11-06T00:28:44.055069285Z" level=info msg="StartContainer for \"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\"" Nov 6 00:28:44.055937 containerd[1886]: time="2025-11-06T00:28:44.055911787Z" level=info msg="connecting to shim bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d" address="unix:///run/containerd/s/5382e775e8df1c6775a2262aaac038b4cfbed96865fc2be8e5d79861d3d7034b" protocol=ttrpc version=3 Nov 6 00:28:44.057853 containerd[1886]: time="2025-11-06T00:28:44.057812528Z" level=info msg="connecting to shim 33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779" address="unix:///run/containerd/s/15b24b360a537c9902625d97a50858fb60855f59640f745a3fa22854fbb5e24c" protocol=ttrpc version=3 Nov 6 00:28:44.091804 systemd[1]: Started cri-containerd-45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70.scope - libcontainer container 
45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70. Nov 6 00:28:44.100930 systemd[1]: Started cri-containerd-33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779.scope - libcontainer container 33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779. Nov 6 00:28:44.109903 systemd[1]: Started cri-containerd-bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d.scope - libcontainer container bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d. Nov 6 00:28:44.204774 containerd[1886]: time="2025-11-06T00:28:44.203187479Z" level=info msg="StartContainer for \"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\" returns successfully" Nov 6 00:28:44.236046 containerd[1886]: time="2025-11-06T00:28:44.236003260Z" level=info msg="StartContainer for \"45bd42f4c27cbdc412ad75d7280df627e7f3c5d53021ea6cc5dd87242bdd9b70\" returns successfully" Nov 6 00:28:44.251607 containerd[1886]: time="2025-11-06T00:28:44.251100216Z" level=info msg="StartContainer for \"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\" returns successfully" Nov 6 00:28:44.329733 kubelet[2880]: E1106 00:28:44.329690 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:28:44.408016 kubelet[2880]: E1106 00:28:44.407966 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:28:44.502331 kubelet[2880]: E1106 00:28:44.502263 2880 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-191?timeout=10s\": dial tcp 172.31.28.191:6443: connect: connection refused" interval="1.6s" Nov 6 00:28:44.652393 kubelet[2880]: E1106 00:28:44.652346 2880 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:28:44.691418 kubelet[2880]: I1106 00:28:44.691387 2880 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-191" Nov 6 00:28:44.691776 kubelet[2880]: E1106 00:28:44.691745 2880 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.191:6443/api/v1/nodes\": dial tcp 172.31.28.191:6443: connect: connection refused" node="ip-172-31-28-191" Nov 6 00:28:45.111063 kubelet[2880]: E1106 00:28:45.111016 2880 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.191:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:28:45.187602 kubelet[2880]: E1106 00:28:45.185952 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191" Nov 6 00:28:45.203611 kubelet[2880]: E1106 00:28:45.201632 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191" Nov 6 00:28:45.203611 kubelet[2880]: 
E1106 00:28:45.202094 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:46.205401 kubelet[2880]: E1106 00:28:46.205364 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:46.206531 kubelet[2880]: E1106 00:28:46.206500 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:46.207107 kubelet[2880]: E1106 00:28:46.207082 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:46.294278 kubelet[2880]: I1106 00:28:46.294248 2880 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-191"
Nov 6 00:28:47.205525 kubelet[2880]: E1106 00:28:47.205488 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:47.207533 kubelet[2880]: E1106 00:28:47.207502 2880 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:48.340418 kubelet[2880]: E1106 00:28:48.340376 2880 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-191\" not found" node="ip-172-31-28-191"
Nov 6 00:28:48.390080 kubelet[2880]: E1106 00:28:48.389986 2880 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-191.1875436353d9e6e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-191,UID:ip-172-31-28-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-191,},FirstTimestamp:2025-11-06 00:28:43.073029861 +0000 UTC m=+0.456102638,LastTimestamp:2025-11-06 00:28:43.073029861 +0000 UTC m=+0.456102638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-191,}"
Nov 6 00:28:48.445281 kubelet[2880]: E1106 00:28:48.445063 2880 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-191.18754363556fb97d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-191,UID:ip-172-31-28-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-28-191,},FirstTimestamp:2025-11-06 00:28:43.099625853 +0000 UTC m=+0.482698635,LastTimestamp:2025-11-06 00:28:43.099625853 +0000 UTC m=+0.482698635,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-191,}"
Nov 6 00:28:48.455351 kubelet[2880]: I1106 00:28:48.455146 2880 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-191"
Nov 6 00:28:48.455351 kubelet[2880]: E1106 00:28:48.455190 2880 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-191\": node \"ip-172-31-28-191\" not found"
Nov 6 00:28:48.499946 kubelet[2880]: I1106 00:28:48.499898 2880 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-191"
Nov 6 00:28:48.509096 kubelet[2880]: E1106 00:28:48.509053 2880 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-191\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-191"
Nov 6 00:28:48.509435 kubelet[2880]: I1106 00:28:48.509109 2880 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:48.511866 kubelet[2880]: E1106 00:28:48.511837 2880 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-191\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:48.511866 kubelet[2880]: I1106 00:28:48.511865 2880 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:48.516405 kubelet[2880]: E1106 00:28:48.516361 2880 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-191\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:49.061627 kubelet[2880]: I1106 00:28:49.061548 2880 apiserver.go:52] "Watching apiserver"
Nov 6 00:28:49.090979 kubelet[2880]: I1106 00:28:49.090920 2880 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 00:28:49.808747 update_engine[1848]: I20251106 00:28:49.808660 1848 update_attempter.cc:509] Updating boot flags...
Nov 6 00:28:49.873995 kubelet[2880]: I1106 00:28:49.873491 2880 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:50.726073 systemd[1]: Reload requested from client PID 3432 ('systemctl') (unit session-9.scope)...
Nov 6 00:28:50.726090 systemd[1]: Reloading...
Nov 6 00:28:50.837609 zram_generator::config[3477]: No configuration found.
Nov 6 00:28:51.138868 systemd[1]: Reloading finished in 412 ms.
Nov 6 00:28:51.173447 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:51.194305 systemd[1]: kubelet.service: Deactivated successfully.
Nov 6 00:28:51.195161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:51.195248 systemd[1]: kubelet.service: Consumed 887ms CPU time, 129.3M memory peak.
Nov 6 00:28:51.200130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:28:51.450018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:28:51.462026 (kubelet)[3535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:28:51.524022 kubelet[3535]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:28:51.526462 kubelet[3535]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:28:51.526462 kubelet[3535]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:28:51.526462 kubelet[3535]: I1106 00:28:51.524704 3535 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:28:51.536303 kubelet[3535]: I1106 00:28:51.536262 3535 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 6 00:28:51.536303 kubelet[3535]: I1106 00:28:51.536305 3535 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:28:51.536981 kubelet[3535]: I1106 00:28:51.536957 3535 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 6 00:28:51.541265 kubelet[3535]: I1106 00:28:51.541240 3535 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 6 00:28:51.558253 kubelet[3535]: I1106 00:28:51.558213 3535 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:28:51.594934 kubelet[3535]: I1106 00:28:51.594891 3535 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:28:51.601460 kubelet[3535]: I1106 00:28:51.601414 3535 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 00:28:51.601726 kubelet[3535]: I1106 00:28:51.601688 3535 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:28:51.601911 kubelet[3535]: I1106 00:28:51.601723 3535 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:28:51.601911 kubelet[3535]: I1106 00:28:51.601894 3535 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 00:28:51.601911 kubelet[3535]: I1106 00:28:51.601904 3535 container_manager_linux.go:303] "Creating device plugin manager"
Nov 6 00:28:51.603627 kubelet[3535]: I1106 00:28:51.603571 3535 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:28:51.605736 kubelet[3535]: I1106 00:28:51.605294 3535 kubelet.go:480] "Attempting to sync node with API server"
Nov 6 00:28:51.605736 kubelet[3535]: I1106 00:28:51.605317 3535 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:28:51.605736 kubelet[3535]: I1106 00:28:51.605718 3535 kubelet.go:386] "Adding apiserver pod source"
Nov 6 00:28:51.605736 kubelet[3535]: I1106 00:28:51.605732 3535 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:28:51.612615 kubelet[3535]: I1106 00:28:51.612315 3535 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:28:51.613103 kubelet[3535]: I1106 00:28:51.613088 3535 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 6 00:28:51.638041 kubelet[3535]: I1106 00:28:51.637997 3535 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 00:28:51.638172 kubelet[3535]: I1106 00:28:51.638071 3535 server.go:1289] "Started kubelet"
Nov 6 00:28:51.638317 kubelet[3535]: I1106 00:28:51.638257 3535 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:28:51.638787 kubelet[3535]: I1106 00:28:51.638352 3535 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:28:51.639388 kubelet[3535]: I1106 00:28:51.639007 3535 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:28:51.645568 kubelet[3535]: I1106 00:28:51.645545 3535 server.go:317] "Adding debug handlers to kubelet server"
Nov 6 00:28:51.647219 kubelet[3535]: I1106 00:28:51.646956 3535 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 00:28:51.647219 kubelet[3535]: I1106 00:28:51.647035 3535 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 00:28:51.651421 kubelet[3535]: I1106 00:28:51.651406 3535 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 00:28:51.651674 kubelet[3535]: I1106 00:28:51.651654 3535 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 00:28:51.651850 kubelet[3535]: I1106 00:28:51.651842 3535 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 00:28:51.654019 kubelet[3535]: E1106 00:28:51.653988 3535 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 00:28:51.655225 kubelet[3535]: I1106 00:28:51.655119 3535 factory.go:223] Registration of the systemd container factory successfully
Nov 6 00:28:51.655833 kubelet[3535]: I1106 00:28:51.655766 3535 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 00:28:51.663685 kubelet[3535]: I1106 00:28:51.663652 3535 factory.go:223] Registration of the containerd container factory successfully
Nov 6 00:28:51.664717 kubelet[3535]: I1106 00:28:51.664688 3535 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 6 00:28:51.681548 kubelet[3535]: I1106 00:28:51.681202 3535 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 6 00:28:51.681548 kubelet[3535]: I1106 00:28:51.681229 3535 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 6 00:28:51.681548 kubelet[3535]: I1106 00:28:51.681248 3535 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 00:28:51.681548 kubelet[3535]: I1106 00:28:51.681254 3535 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 6 00:28:51.681548 kubelet[3535]: E1106 00:28:51.681292 3535 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.723780 3535 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.723799 3535 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.723822 3535 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.723961 3535 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.723973 3535 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.723994 3535 policy_none.go:49] "None policy: Start"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.724004 3535 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.724013 3535 state_mem.go:35] "Initializing new in-memory state store"
Nov 6 00:28:51.724618 kubelet[3535]: I1106 00:28:51.724096 3535 state_mem.go:75] "Updated machine memory state"
Nov 6 00:28:51.730852 kubelet[3535]: E1106 00:28:51.730822 3535 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 6 00:28:51.731147 kubelet[3535]: I1106 00:28:51.731020 3535 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:28:51.731147 kubelet[3535]: I1106 00:28:51.731036 3535 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:28:51.732804 kubelet[3535]: I1106 00:28:51.731448 3535 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:28:51.733719 kubelet[3535]: E1106 00:28:51.733696 3535 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:28:51.782962 kubelet[3535]: I1106 00:28:51.782905 3535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:51.783804 kubelet[3535]: I1106 00:28:51.783769 3535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:51.783924 kubelet[3535]: I1106 00:28:51.783908 3535 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-191"
Nov 6 00:28:51.791770 kubelet[3535]: E1106 00:28:51.791728 3535 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-191\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:51.841955 kubelet[3535]: I1106 00:28:51.841902 3535 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-191"
Nov 6 00:28:51.849392 kubelet[3535]: I1106 00:28:51.849349 3535 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-191"
Nov 6 00:28:51.849521 kubelet[3535]: I1106 00:28:51.849460 3535 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-191"
Nov 6 00:28:51.852773 kubelet[3535]: I1106 00:28:51.852733 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/110b3ae2b1f257ba0208954aa680c557-ca-certs\") pod \"kube-apiserver-ip-172-31-28-191\" (UID: \"110b3ae2b1f257ba0208954aa680c557\") " pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:51.852909 kubelet[3535]: I1106 00:28:51.852778 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/110b3ae2b1f257ba0208954aa680c557-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-191\" (UID: \"110b3ae2b1f257ba0208954aa680c557\") " pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:51.852909 kubelet[3535]: I1106 00:28:51.852805 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/110b3ae2b1f257ba0208954aa680c557-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-191\" (UID: \"110b3ae2b1f257ba0208954aa680c557\") " pod="kube-system/kube-apiserver-ip-172-31-28-191"
Nov 6 00:28:51.852909 kubelet[3535]: I1106 00:28:51.852827 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:51.852909 kubelet[3535]: I1106 00:28:51.852852 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:51.852909 kubelet[3535]: I1106 00:28:51.852874 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:51.853795 kubelet[3535]: I1106 00:28:51.852896 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:51.853795 kubelet[3535]: I1106 00:28:51.852918 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/211ab5501e823debf512121df5e3aeed-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-191\" (UID: \"211ab5501e823debf512121df5e3aeed\") " pod="kube-system/kube-controller-manager-ip-172-31-28-191"
Nov 6 00:28:51.853795 kubelet[3535]: I1106 00:28:51.852943 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf123470edba01ba5d0ff5224461aa10-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-191\" (UID: \"cf123470edba01ba5d0ff5224461aa10\") " pod="kube-system/kube-scheduler-ip-172-31-28-191"
Nov 6 00:28:52.608894 kubelet[3535]: I1106 00:28:52.608655 3535 apiserver.go:52] "Watching apiserver"
Nov 6 00:28:52.652827 kubelet[3535]: I1106 00:28:52.652781 3535 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 6 00:28:52.720228 kubelet[3535]: I1106 00:28:52.720051 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-191" podStartSLOduration=1.720036047 podStartE2EDuration="1.720036047s" podCreationTimestamp="2025-11-06 00:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:28:52.719682225 +0000 UTC m=+1.250266396" watchObservedRunningTime="2025-11-06 00:28:52.720036047 +0000 UTC m=+1.250620196"
Nov 6 00:28:52.730274 kubelet[3535]: I1106 00:28:52.730016 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-191" podStartSLOduration=3.729978347 podStartE2EDuration="3.729978347s" podCreationTimestamp="2025-11-06 00:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:28:52.728395954 +0000 UTC m=+1.258980123" watchObservedRunningTime="2025-11-06 00:28:52.729978347 +0000 UTC m=+1.260562500"
Nov 6 00:28:52.740244 kubelet[3535]: I1106 00:28:52.740161 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-191" podStartSLOduration=1.740144498 podStartE2EDuration="1.740144498s" podCreationTimestamp="2025-11-06 00:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:28:52.739837565 +0000 UTC m=+1.270421735" watchObservedRunningTime="2025-11-06 00:28:52.740144498 +0000 UTC m=+1.270728668"
Nov 6 00:28:55.792616 kubelet[3535]: I1106 00:28:55.792425 3535 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 6 00:28:55.794742 containerd[1886]: time="2025-11-06T00:28:55.793537522Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 6 00:28:55.795127 kubelet[3535]: I1106 00:28:55.794131 3535 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 6 00:28:56.768630 systemd[1]: Created slice kubepods-besteffort-podf8124a4b_9f27_4356_8f7e_f868843bdde3.slice - libcontainer container kubepods-besteffort-podf8124a4b_9f27_4356_8f7e_f868843bdde3.slice.
Nov 6 00:28:56.785604 kubelet[3535]: I1106 00:28:56.785362 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8124a4b-9f27-4356-8f7e-f868843bdde3-xtables-lock\") pod \"kube-proxy-k926b\" (UID: \"f8124a4b-9f27-4356-8f7e-f868843bdde3\") " pod="kube-system/kube-proxy-k926b"
Nov 6 00:28:56.785604 kubelet[3535]: I1106 00:28:56.785403 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8m59\" (UniqueName: \"kubernetes.io/projected/f8124a4b-9f27-4356-8f7e-f868843bdde3-kube-api-access-q8m59\") pod \"kube-proxy-k926b\" (UID: \"f8124a4b-9f27-4356-8f7e-f868843bdde3\") " pod="kube-system/kube-proxy-k926b"
Nov 6 00:28:56.785604 kubelet[3535]: I1106 00:28:56.785426 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8124a4b-9f27-4356-8f7e-f868843bdde3-kube-proxy\") pod \"kube-proxy-k926b\" (UID: \"f8124a4b-9f27-4356-8f7e-f868843bdde3\") " pod="kube-system/kube-proxy-k926b"
Nov 6 00:28:56.785604 kubelet[3535]: I1106 00:28:56.785443 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8124a4b-9f27-4356-8f7e-f868843bdde3-lib-modules\") pod \"kube-proxy-k926b\" (UID: \"f8124a4b-9f27-4356-8f7e-f868843bdde3\") " pod="kube-system/kube-proxy-k926b"
Nov 6 00:28:57.040423 systemd[1]: Created slice kubepods-besteffort-pod317e0ff7_d424_4d37_9d99_73403ee84850.slice - libcontainer container kubepods-besteffort-pod317e0ff7_d424_4d37_9d99_73403ee84850.slice.
Nov 6 00:28:57.077797 containerd[1886]: time="2025-11-06T00:28:57.077752372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k926b,Uid:f8124a4b-9f27-4356-8f7e-f868843bdde3,Namespace:kube-system,Attempt:0,}"
Nov 6 00:28:57.087465 kubelet[3535]: I1106 00:28:57.087353 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6q7f\" (UniqueName: \"kubernetes.io/projected/317e0ff7-d424-4d37-9d99-73403ee84850-kube-api-access-q6q7f\") pod \"tigera-operator-7dcd859c48-ghsxr\" (UID: \"317e0ff7-d424-4d37-9d99-73403ee84850\") " pod="tigera-operator/tigera-operator-7dcd859c48-ghsxr"
Nov 6 00:28:57.088070 kubelet[3535]: I1106 00:28:57.087437 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/317e0ff7-d424-4d37-9d99-73403ee84850-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ghsxr\" (UID: \"317e0ff7-d424-4d37-9d99-73403ee84850\") " pod="tigera-operator/tigera-operator-7dcd859c48-ghsxr"
Nov 6 00:28:57.105350 containerd[1886]: time="2025-11-06T00:28:57.105302329Z" level=info msg="connecting to shim c17a99a12021e36617624065a8d128d7256bff48ddc68c12f228bc94f9f576cb" address="unix:///run/containerd/s/c81cec5248056742d9babdc869e5eaa62a26ef0ade260bff801909a564a9be6e" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:28:57.138145 systemd[1]: Started cri-containerd-c17a99a12021e36617624065a8d128d7256bff48ddc68c12f228bc94f9f576cb.scope - libcontainer container c17a99a12021e36617624065a8d128d7256bff48ddc68c12f228bc94f9f576cb.
Nov 6 00:28:57.169549 containerd[1886]: time="2025-11-06T00:28:57.169518329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k926b,Uid:f8124a4b-9f27-4356-8f7e-f868843bdde3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c17a99a12021e36617624065a8d128d7256bff48ddc68c12f228bc94f9f576cb\""
Nov 6 00:28:57.178427 containerd[1886]: time="2025-11-06T00:28:57.178349710Z" level=info msg="CreateContainer within sandbox \"c17a99a12021e36617624065a8d128d7256bff48ddc68c12f228bc94f9f576cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 6 00:28:57.198205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308316582.mount: Deactivated successfully.
Nov 6 00:28:57.199481 containerd[1886]: time="2025-11-06T00:28:57.199449919Z" level=info msg="Container f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:28:57.211807 containerd[1886]: time="2025-11-06T00:28:57.211648449Z" level=info msg="CreateContainer within sandbox \"c17a99a12021e36617624065a8d128d7256bff48ddc68c12f228bc94f9f576cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24\""
Nov 6 00:28:57.212721 containerd[1886]: time="2025-11-06T00:28:57.212673263Z" level=info msg="StartContainer for \"f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24\""
Nov 6 00:28:57.214665 containerd[1886]: time="2025-11-06T00:28:57.214542871Z" level=info msg="connecting to shim f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24" address="unix:///run/containerd/s/c81cec5248056742d9babdc869e5eaa62a26ef0ade260bff801909a564a9be6e" protocol=ttrpc version=3
Nov 6 00:28:57.240803 systemd[1]: Started cri-containerd-f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24.scope - libcontainer container f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24.
Nov 6 00:28:57.293722 containerd[1886]: time="2025-11-06T00:28:57.293589520Z" level=info msg="StartContainer for \"f1a25da762ee0cb1add5b109203b9540f4d229230fa627ec6b709e43eef58f24\" returns successfully"
Nov 6 00:28:57.349127 containerd[1886]: time="2025-11-06T00:28:57.349082871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ghsxr,Uid:317e0ff7-d424-4d37-9d99-73403ee84850,Namespace:tigera-operator,Attempt:0,}"
Nov 6 00:28:57.384614 containerd[1886]: time="2025-11-06T00:28:57.383880111Z" level=info msg="connecting to shim 2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b" address="unix:///run/containerd/s/817bacdd642494d9791ea02a56f1cb45b3345e5a1aaea8bea78cdf928948c3d6" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:28:57.425838 systemd[1]: Started cri-containerd-2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b.scope - libcontainer container 2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b.
Nov 6 00:28:57.506996 containerd[1886]: time="2025-11-06T00:28:57.506955481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ghsxr,Uid:317e0ff7-d424-4d37-9d99-73403ee84850,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b\""
Nov 6 00:28:57.508741 containerd[1886]: time="2025-11-06T00:28:57.508515957Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 6 00:28:57.739263 kubelet[3535]: I1106 00:28:57.739207 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k926b" podStartSLOduration=1.739190349 podStartE2EDuration="1.739190349s" podCreationTimestamp="2025-11-06 00:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:28:57.739081248 +0000 UTC m=+6.269665418" watchObservedRunningTime="2025-11-06 00:28:57.739190349 +0000 UTC m=+6.269774520"
Nov 6 00:28:58.987596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353027417.mount: Deactivated successfully.
Nov 6 00:29:00.149576 containerd[1886]: time="2025-11-06T00:29:00.149254130Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:00.153381 containerd[1886]: time="2025-11-06T00:29:00.153294645Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 6 00:29:00.157560 containerd[1886]: time="2025-11-06T00:29:00.156574639Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:00.168024 containerd[1886]: time="2025-11-06T00:29:00.167912834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:29:00.174563 containerd[1886]: time="2025-11-06T00:29:00.174481320Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.665502331s"
Nov 6 00:29:00.174563 containerd[1886]: time="2025-11-06T00:29:00.174565548Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 6 00:29:00.194047 containerd[1886]: time="2025-11-06T00:29:00.193896670Z" level=info msg="CreateContainer within sandbox \"2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 6 00:29:00.224612 containerd[1886]: time="2025-11-06T00:29:00.224158893Z" level=info msg="Container dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:29:00.242101 containerd[1886]: time="2025-11-06T00:29:00.241927331Z" level=info msg="CreateContainer within sandbox \"2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\""
Nov 6 00:29:00.243278 containerd[1886]: time="2025-11-06T00:29:00.243220223Z" level=info msg="StartContainer for \"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\""
Nov 6 00:29:00.244876 containerd[1886]: time="2025-11-06T00:29:00.244838555Z" level=info msg="connecting to shim dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04" address="unix:///run/containerd/s/817bacdd642494d9791ea02a56f1cb45b3345e5a1aaea8bea78cdf928948c3d6" protocol=ttrpc version=3
Nov 6 00:29:00.275404 systemd[1]: Started cri-containerd-dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04.scope - libcontainer container dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04.
Nov 6 00:29:00.317516 containerd[1886]: time="2025-11-06T00:29:00.317434108Z" level=info msg="StartContainer for \"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\" returns successfully"
Nov 6 00:29:00.729042 kubelet[3535]: I1106 00:29:00.728992 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ghsxr" podStartSLOduration=2.057010922 podStartE2EDuration="4.728975187s" podCreationTimestamp="2025-11-06 00:28:56 +0000 UTC" firstStartedPulling="2025-11-06 00:28:57.508167851 +0000 UTC m=+6.038752002" lastFinishedPulling="2025-11-06 00:29:00.180132105 +0000 UTC m=+8.710716267" observedRunningTime="2025-11-06 00:29:00.728865598 +0000 UTC m=+9.259449765" watchObservedRunningTime="2025-11-06 00:29:00.728975187 +0000 UTC m=+9.259559357"
Nov 6 00:29:07.874675 sudo[2300]: pam_unix(sudo:session): session closed for user root
Nov 6 00:29:07.896721 sshd[2299]: Connection closed by 147.75.109.163 port 57638
Nov 6 00:29:07.899283 sshd-session[2296]: pam_unix(sshd:session): session closed for user core
Nov 6 00:29:07.909158 systemd[1]: sshd@8-172.31.28.191:22-147.75.109.163:57638.service: Deactivated successfully.
Nov 6 00:29:07.917067 systemd[1]: session-9.scope: Deactivated successfully.
Nov 6 00:29:07.917946 systemd[1]: session-9.scope: Consumed 6.751s CPU time, 154.9M memory peak.
Nov 6 00:29:07.921539 systemd-logind[1847]: Session 9 logged out. Waiting for processes to exit.
Nov 6 00:29:07.928298 systemd-logind[1847]: Removed session 9.
Nov 6 00:29:14.652170 systemd[1]: Created slice kubepods-besteffort-pod3c28274f_53f1_4247_a1e6_e6230bc88361.slice - libcontainer container kubepods-besteffort-pod3c28274f_53f1_4247_a1e6_e6230bc88361.slice.
Nov 6 00:29:14.836955 kubelet[3535]: I1106 00:29:14.836890 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3c28274f-53f1-4247-a1e6-e6230bc88361-typha-certs\") pod \"calico-typha-55b4f4fcf4-pv8tv\" (UID: \"3c28274f-53f1-4247-a1e6-e6230bc88361\") " pod="calico-system/calico-typha-55b4f4fcf4-pv8tv" Nov 6 00:29:14.836955 kubelet[3535]: I1106 00:29:14.836935 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbj5j\" (UniqueName: \"kubernetes.io/projected/3c28274f-53f1-4247-a1e6-e6230bc88361-kube-api-access-zbj5j\") pod \"calico-typha-55b4f4fcf4-pv8tv\" (UID: \"3c28274f-53f1-4247-a1e6-e6230bc88361\") " pod="calico-system/calico-typha-55b4f4fcf4-pv8tv" Nov 6 00:29:14.836955 kubelet[3535]: I1106 00:29:14.836955 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c28274f-53f1-4247-a1e6-e6230bc88361-tigera-ca-bundle\") pod \"calico-typha-55b4f4fcf4-pv8tv\" (UID: \"3c28274f-53f1-4247-a1e6-e6230bc88361\") " pod="calico-system/calico-typha-55b4f4fcf4-pv8tv" Nov 6 00:29:14.868630 systemd[1]: Created slice kubepods-besteffort-pode35ca801_eae2_4987_b629_2a64773e570d.slice - libcontainer container kubepods-besteffort-pode35ca801_eae2_4987_b629_2a64773e570d.slice. 
Nov 6 00:29:14.964451 containerd[1886]: time="2025-11-06T00:29:14.964329810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b4f4fcf4-pv8tv,Uid:3c28274f-53f1-4247-a1e6-e6230bc88361,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:15.039217 kubelet[3535]: I1106 00:29:15.039168 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-cni-bin-dir\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.040613 kubelet[3535]: I1106 00:29:15.040256 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-cni-net-dir\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.040746 kubelet[3535]: I1106 00:29:15.040689 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-var-lib-calico\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.040879 kubelet[3535]: I1106 00:29:15.040830 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-lib-modules\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.041157 kubelet[3535]: I1106 00:29:15.041133 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/e35ca801-eae2-4987-b629-2a64773e570d-node-certs\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.041353 kubelet[3535]: I1106 00:29:15.041180 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-xtables-lock\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.041353 kubelet[3535]: I1106 00:29:15.041208 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-cni-log-dir\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.041353 kubelet[3535]: I1106 00:29:15.041232 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-flexvol-driver-host\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.041353 kubelet[3535]: I1106 00:29:15.041256 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-policysync\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.041353 kubelet[3535]: I1106 00:29:15.041283 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e35ca801-eae2-4987-b629-2a64773e570d-tigera-ca-bundle\") 
pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.042057 kubelet[3535]: I1106 00:29:15.041309 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e35ca801-eae2-4987-b629-2a64773e570d-var-run-calico\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.042057 kubelet[3535]: I1106 00:29:15.041336 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsnhh\" (UniqueName: \"kubernetes.io/projected/e35ca801-eae2-4987-b629-2a64773e570d-kube-api-access-tsnhh\") pod \"calico-node-77rxg\" (UID: \"e35ca801-eae2-4987-b629-2a64773e570d\") " pod="calico-system/calico-node-77rxg" Nov 6 00:29:15.047426 containerd[1886]: time="2025-11-06T00:29:15.047375769Z" level=info msg="connecting to shim 35a9c7c6e936c39823582f093db7c1e6c22949bcb8affa1269ac43d04ec8d39f" address="unix:///run/containerd/s/1f4de890de7a909ae6bc902328a9700b4adc41bc4cb48777c4442ce70d57ab77" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:15.109736 systemd[1]: Started cri-containerd-35a9c7c6e936c39823582f093db7c1e6c22949bcb8affa1269ac43d04ec8d39f.scope - libcontainer container 35a9c7c6e936c39823582f093db7c1e6c22949bcb8affa1269ac43d04ec8d39f. 
Nov 6 00:29:15.136429 kubelet[3535]: E1106 00:29:15.133764 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:15.142955 kubelet[3535]: I1106 00:29:15.141905 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd556f5-82eb-470d-88d2-246c63940429-kubelet-dir\") pod \"csi-node-driver-tw7sh\" (UID: \"cdd556f5-82eb-470d-88d2-246c63940429\") " pod="calico-system/csi-node-driver-tw7sh" Nov 6 00:29:15.142955 kubelet[3535]: I1106 00:29:15.141960 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cdd556f5-82eb-470d-88d2-246c63940429-varrun\") pod \"csi-node-driver-tw7sh\" (UID: \"cdd556f5-82eb-470d-88d2-246c63940429\") " pod="calico-system/csi-node-driver-tw7sh" Nov 6 00:29:15.142955 kubelet[3535]: I1106 00:29:15.142101 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdd556f5-82eb-470d-88d2-246c63940429-registration-dir\") pod \"csi-node-driver-tw7sh\" (UID: \"cdd556f5-82eb-470d-88d2-246c63940429\") " pod="calico-system/csi-node-driver-tw7sh" Nov 6 00:29:15.142955 kubelet[3535]: I1106 00:29:15.142138 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59f95\" (UniqueName: \"kubernetes.io/projected/cdd556f5-82eb-470d-88d2-246c63940429-kube-api-access-59f95\") pod \"csi-node-driver-tw7sh\" (UID: \"cdd556f5-82eb-470d-88d2-246c63940429\") " pod="calico-system/csi-node-driver-tw7sh" Nov 6 00:29:15.142955 kubelet[3535]: I1106 
00:29:15.142180 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdd556f5-82eb-470d-88d2-246c63940429-socket-dir\") pod \"csi-node-driver-tw7sh\" (UID: \"cdd556f5-82eb-470d-88d2-246c63940429\") " pod="calico-system/csi-node-driver-tw7sh" Nov 6 00:29:15.159608 kubelet[3535]: E1106 00:29:15.159332 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.159608 kubelet[3535]: W1106 00:29:15.159364 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.166500 kubelet[3535]: E1106 00:29:15.166457 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.166997 kubelet[3535]: E1106 00:29:15.166976 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.167134 kubelet[3535]: W1106 00:29:15.167118 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.167210 kubelet[3535]: E1106 00:29:15.167199 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.206547 kubelet[3535]: E1106 00:29:15.206502 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.206547 kubelet[3535]: W1106 00:29:15.206535 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.206782 kubelet[3535]: E1106 00:29:15.206558 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.243563 kubelet[3535]: E1106 00:29:15.243477 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.243563 kubelet[3535]: W1106 00:29:15.243506 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.243563 kubelet[3535]: E1106 00:29:15.243531 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.244133 kubelet[3535]: E1106 00:29:15.243869 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.244133 kubelet[3535]: W1106 00:29:15.243885 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.244133 kubelet[3535]: E1106 00:29:15.243901 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.245169 kubelet[3535]: E1106 00:29:15.244606 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.245169 kubelet[3535]: W1106 00:29:15.244623 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.245169 kubelet[3535]: E1106 00:29:15.244641 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.245169 kubelet[3535]: E1106 00:29:15.244880 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.245169 kubelet[3535]: W1106 00:29:15.244891 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.245169 kubelet[3535]: E1106 00:29:15.244905 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.245169 kubelet[3535]: E1106 00:29:15.245137 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.245169 kubelet[3535]: W1106 00:29:15.245148 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.245169 kubelet[3535]: E1106 00:29:15.245162 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.245687 kubelet[3535]: E1106 00:29:15.245669 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.245746 kubelet[3535]: W1106 00:29:15.245688 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.245746 kubelet[3535]: E1106 00:29:15.245701 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.246598 kubelet[3535]: E1106 00:29:15.246518 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.246598 kubelet[3535]: W1106 00:29:15.246534 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.246598 kubelet[3535]: E1106 00:29:15.246547 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.247110 kubelet[3535]: E1106 00:29:15.246851 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.247110 kubelet[3535]: W1106 00:29:15.246866 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.247110 kubelet[3535]: E1106 00:29:15.246880 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.247268 kubelet[3535]: E1106 00:29:15.247149 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.247268 kubelet[3535]: W1106 00:29:15.247160 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.247268 kubelet[3535]: E1106 00:29:15.247173 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.247928 kubelet[3535]: E1106 00:29:15.247401 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.247928 kubelet[3535]: W1106 00:29:15.247421 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.247928 kubelet[3535]: E1106 00:29:15.247432 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.247928 kubelet[3535]: E1106 00:29:15.247699 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.247928 kubelet[3535]: W1106 00:29:15.247710 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.247928 kubelet[3535]: E1106 00:29:15.247722 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.248691 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.249605 kubelet[3535]: W1106 00:29:15.248707 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.248721 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.248964 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.249605 kubelet[3535]: W1106 00:29:15.248974 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.248986 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.249208 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.249605 kubelet[3535]: W1106 00:29:15.249218 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.249234 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.249605 kubelet[3535]: E1106 00:29:15.249468 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.250076 kubelet[3535]: W1106 00:29:15.249477 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.250076 kubelet[3535]: E1106 00:29:15.249489 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.250076 kubelet[3535]: E1106 00:29:15.249711 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.250076 kubelet[3535]: W1106 00:29:15.249721 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.250076 kubelet[3535]: E1106 00:29:15.249733 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.250076 kubelet[3535]: E1106 00:29:15.249945 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.250076 kubelet[3535]: W1106 00:29:15.249955 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.250076 kubelet[3535]: E1106 00:29:15.249965 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.250440 kubelet[3535]: E1106 00:29:15.250212 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.250440 kubelet[3535]: W1106 00:29:15.250222 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.250440 kubelet[3535]: E1106 00:29:15.250234 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.250440 kubelet[3535]: E1106 00:29:15.250403 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.250440 kubelet[3535]: W1106 00:29:15.250411 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.250440 kubelet[3535]: E1106 00:29:15.250421 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.251692 kubelet[3535]: E1106 00:29:15.251672 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.251692 kubelet[3535]: W1106 00:29:15.251691 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.251808 kubelet[3535]: E1106 00:29:15.251705 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.252261 kubelet[3535]: E1106 00:29:15.252239 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.252261 kubelet[3535]: W1106 00:29:15.252259 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.252370 kubelet[3535]: E1106 00:29:15.252273 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.252518 kubelet[3535]: E1106 00:29:15.252503 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.252518 kubelet[3535]: W1106 00:29:15.252518 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.252629 kubelet[3535]: E1106 00:29:15.252530 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.252829 kubelet[3535]: E1106 00:29:15.252811 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.252829 kubelet[3535]: W1106 00:29:15.252829 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.252931 kubelet[3535]: E1106 00:29:15.252841 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.253131 kubelet[3535]: E1106 00:29:15.253113 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.253131 kubelet[3535]: W1106 00:29:15.253130 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.253226 kubelet[3535]: E1106 00:29:15.253143 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.253416 kubelet[3535]: E1106 00:29:15.253399 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.253469 kubelet[3535]: W1106 00:29:15.253416 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.253469 kubelet[3535]: E1106 00:29:15.253432 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:15.298537 kubelet[3535]: E1106 00:29:15.298437 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:15.298537 kubelet[3535]: W1106 00:29:15.298477 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:15.298537 kubelet[3535]: E1106 00:29:15.298499 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:15.307863 containerd[1886]: time="2025-11-06T00:29:15.307786905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b4f4fcf4-pv8tv,Uid:3c28274f-53f1-4247-a1e6-e6230bc88361,Namespace:calico-system,Attempt:0,} returns sandbox id \"35a9c7c6e936c39823582f093db7c1e6c22949bcb8affa1269ac43d04ec8d39f\"" Nov 6 00:29:15.317500 containerd[1886]: time="2025-11-06T00:29:15.317456467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:29:15.473897 containerd[1886]: time="2025-11-06T00:29:15.473859988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-77rxg,Uid:e35ca801-eae2-4987-b629-2a64773e570d,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:15.497528 containerd[1886]: time="2025-11-06T00:29:15.496507281Z" level=info msg="connecting to shim ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27" address="unix:///run/containerd/s/391d4d6d6ab6cd785da08bd232316a007f445b2e0fb54049d59404ec492e555c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:15.536846 systemd[1]: Started cri-containerd-ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27.scope - libcontainer container ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27. 
Nov 6 00:29:15.627476 containerd[1886]: time="2025-11-06T00:29:15.627418078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-77rxg,Uid:e35ca801-eae2-4987-b629-2a64773e570d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\"" Nov 6 00:29:16.682604 kubelet[3535]: E1106 00:29:16.681977 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:16.705715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333186884.mount: Deactivated successfully. Nov 6 00:29:17.804926 containerd[1886]: time="2025-11-06T00:29:17.804854955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:17.806060 containerd[1886]: time="2025-11-06T00:29:17.805850142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:29:17.807191 containerd[1886]: time="2025-11-06T00:29:17.807158337Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:17.809445 containerd[1886]: time="2025-11-06T00:29:17.809409652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:17.810607 containerd[1886]: time="2025-11-06T00:29:17.810136115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.492631726s" Nov 6 00:29:17.810607 containerd[1886]: time="2025-11-06T00:29:17.810166574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:29:17.811452 containerd[1886]: time="2025-11-06T00:29:17.811434272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:29:17.833239 containerd[1886]: time="2025-11-06T00:29:17.833197438Z" level=info msg="CreateContainer within sandbox \"35a9c7c6e936c39823582f093db7c1e6c22949bcb8affa1269ac43d04ec8d39f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:29:17.842628 containerd[1886]: time="2025-11-06T00:29:17.841636492Z" level=info msg="Container 545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:17.852950 containerd[1886]: time="2025-11-06T00:29:17.852902012Z" level=info msg="CreateContainer within sandbox \"35a9c7c6e936c39823582f093db7c1e6c22949bcb8affa1269ac43d04ec8d39f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5\"" Nov 6 00:29:17.855129 containerd[1886]: time="2025-11-06T00:29:17.855087879Z" level=info msg="StartContainer for \"545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5\"" Nov 6 00:29:17.857616 containerd[1886]: time="2025-11-06T00:29:17.856854260Z" level=info msg="connecting to shim 545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5" address="unix:///run/containerd/s/1f4de890de7a909ae6bc902328a9700b4adc41bc4cb48777c4442ce70d57ab77" protocol=ttrpc version=3 Nov 6 
00:29:17.953840 systemd[1]: Started cri-containerd-545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5.scope - libcontainer container 545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5. Nov 6 00:29:18.028868 containerd[1886]: time="2025-11-06T00:29:18.028789063Z" level=info msg="StartContainer for \"545d9851d89873665880f556da548f524ba261197171e1800461ed2d74270bc5\" returns successfully" Nov 6 00:29:18.683677 kubelet[3535]: E1106 00:29:18.683624 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:18.865404 kubelet[3535]: E1106 00:29:18.865353 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.865745 kubelet[3535]: W1106 00:29:18.865384 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.867814 kubelet[3535]: E1106 00:29:18.867758 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.868437 kubelet[3535]: E1106 00:29:18.868390 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.868648 kubelet[3535]: W1106 00:29:18.868410 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.868648 kubelet[3535]: E1106 00:29:18.868540 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.869193 kubelet[3535]: E1106 00:29:18.869178 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.869395 kubelet[3535]: W1106 00:29:18.869297 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.869395 kubelet[3535]: E1106 00:29:18.869319 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.869875 kubelet[3535]: E1106 00:29:18.869856 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.869875 kubelet[3535]: W1106 00:29:18.869872 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.869992 kubelet[3535]: E1106 00:29:18.869891 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.870141 kubelet[3535]: E1106 00:29:18.870121 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.870211 kubelet[3535]: W1106 00:29:18.870136 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.870211 kubelet[3535]: E1106 00:29:18.870168 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.870672 kubelet[3535]: E1106 00:29:18.870645 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.870672 kubelet[3535]: W1106 00:29:18.870663 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.870799 kubelet[3535]: E1106 00:29:18.870678 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.870910 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.872597 kubelet[3535]: W1106 00:29:18.870942 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.870956 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.871205 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.872597 kubelet[3535]: W1106 00:29:18.871214 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.871226 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.871524 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.872597 kubelet[3535]: W1106 00:29:18.871537 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.871550 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.872597 kubelet[3535]: E1106 00:29:18.871835 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.873070 kubelet[3535]: W1106 00:29:18.871846 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.873070 kubelet[3535]: E1106 00:29:18.871858 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.873070 kubelet[3535]: E1106 00:29:18.872099 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.873070 kubelet[3535]: W1106 00:29:18.872109 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.873070 kubelet[3535]: E1106 00:29:18.872139 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.873070 kubelet[3535]: E1106 00:29:18.872383 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.873070 kubelet[3535]: W1106 00:29:18.872395 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.873070 kubelet[3535]: E1106 00:29:18.872407 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.873070 kubelet[3535]: E1106 00:29:18.872722 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.873070 kubelet[3535]: W1106 00:29:18.872734 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.873474 kubelet[3535]: E1106 00:29:18.872756 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.873474 kubelet[3535]: E1106 00:29:18.872999 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.873474 kubelet[3535]: W1106 00:29:18.873010 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.873474 kubelet[3535]: E1106 00:29:18.873021 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.873474 kubelet[3535]: E1106 00:29:18.873265 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.873474 kubelet[3535]: W1106 00:29:18.873275 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.873474 kubelet[3535]: E1106 00:29:18.873296 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.877881 kubelet[3535]: E1106 00:29:18.877848 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.877881 kubelet[3535]: W1106 00:29:18.877879 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.878049 kubelet[3535]: E1106 00:29:18.877900 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.879273 kubelet[3535]: E1106 00:29:18.879247 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.879273 kubelet[3535]: W1106 00:29:18.879268 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.879454 kubelet[3535]: E1106 00:29:18.879287 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.879611 kubelet[3535]: E1106 00:29:18.879546 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.879611 kubelet[3535]: W1106 00:29:18.879572 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.879611 kubelet[3535]: E1106 00:29:18.879603 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.884467 kubelet[3535]: E1106 00:29:18.884436 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.884467 kubelet[3535]: W1106 00:29:18.884459 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.884680 kubelet[3535]: E1106 00:29:18.884482 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.884819 kubelet[3535]: E1106 00:29:18.884801 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.884890 kubelet[3535]: W1106 00:29:18.884818 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.884890 kubelet[3535]: E1106 00:29:18.884834 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.885092 kubelet[3535]: E1106 00:29:18.885076 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.885092 kubelet[3535]: W1106 00:29:18.885090 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.885194 kubelet[3535]: E1106 00:29:18.885103 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.885353 kubelet[3535]: E1106 00:29:18.885337 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.885353 kubelet[3535]: W1106 00:29:18.885351 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.885458 kubelet[3535]: E1106 00:29:18.885363 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.885572 kubelet[3535]: E1106 00:29:18.885556 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.885572 kubelet[3535]: W1106 00:29:18.885569 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.885693 kubelet[3535]: E1106 00:29:18.885602 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.885857 kubelet[3535]: E1106 00:29:18.885840 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.885857 kubelet[3535]: W1106 00:29:18.885854 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.885959 kubelet[3535]: E1106 00:29:18.885866 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.886622 kubelet[3535]: E1106 00:29:18.886600 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.886622 kubelet[3535]: W1106 00:29:18.886618 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.886752 kubelet[3535]: E1106 00:29:18.886632 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.886962 kubelet[3535]: E1106 00:29:18.886939 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.886962 kubelet[3535]: W1106 00:29:18.886958 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.887072 kubelet[3535]: E1106 00:29:18.886974 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.887288 kubelet[3535]: E1106 00:29:18.887270 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.887288 kubelet[3535]: W1106 00:29:18.887285 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.887395 kubelet[3535]: E1106 00:29:18.887299 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.887535 kubelet[3535]: E1106 00:29:18.887519 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.887535 kubelet[3535]: W1106 00:29:18.887532 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.887650 kubelet[3535]: E1106 00:29:18.887544 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.887846 kubelet[3535]: E1106 00:29:18.887827 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.887907 kubelet[3535]: W1106 00:29:18.887857 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.887907 kubelet[3535]: E1106 00:29:18.887872 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.888277 kubelet[3535]: E1106 00:29:18.888259 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.888277 kubelet[3535]: W1106 00:29:18.888273 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.888464 kubelet[3535]: E1106 00:29:18.888286 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.888539 kubelet[3535]: E1106 00:29:18.888522 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.888619 kubelet[3535]: W1106 00:29:18.888542 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.888619 kubelet[3535]: E1106 00:29:18.888555 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:18.888824 kubelet[3535]: E1106 00:29:18.888806 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.888824 kubelet[3535]: W1106 00:29:18.888821 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.888912 kubelet[3535]: E1106 00:29:18.888835 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:29:18.889225 kubelet[3535]: E1106 00:29:18.889207 3535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:29:18.889225 kubelet[3535]: W1106 00:29:18.889221 3535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:29:18.889321 kubelet[3535]: E1106 00:29:18.889233 3535 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:29:19.131273 containerd[1886]: time="2025-11-06T00:29:19.131190241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:19.132349 containerd[1886]: time="2025-11-06T00:29:19.132218660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:29:19.133495 containerd[1886]: time="2025-11-06T00:29:19.133468301Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:19.135547 containerd[1886]: time="2025-11-06T00:29:19.135497899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:19.136378 containerd[1886]: time="2025-11-06T00:29:19.135980938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.324443392s" Nov 6 00:29:19.136378 containerd[1886]: time="2025-11-06T00:29:19.136010769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:29:19.141301 containerd[1886]: time="2025-11-06T00:29:19.141267737Z" level=info msg="CreateContainer within sandbox \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:29:19.152541 containerd[1886]: time="2025-11-06T00:29:19.151746102Z" level=info msg="Container def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:19.187945 containerd[1886]: time="2025-11-06T00:29:19.187877885Z" level=info msg="CreateContainer within sandbox \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\"" Nov 6 00:29:19.189710 containerd[1886]: time="2025-11-06T00:29:19.189554856Z" level=info msg="StartContainer for \"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\"" Nov 6 00:29:19.192520 containerd[1886]: time="2025-11-06T00:29:19.192476169Z" level=info msg="connecting to shim def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7" address="unix:///run/containerd/s/391d4d6d6ab6cd785da08bd232316a007f445b2e0fb54049d59404ec492e555c" protocol=ttrpc version=3 Nov 6 00:29:19.227776 systemd[1]: Started cri-containerd-def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7.scope - libcontainer container def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7. Nov 6 00:29:19.289323 containerd[1886]: time="2025-11-06T00:29:19.289260031Z" level=info msg="StartContainer for \"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\" returns successfully" Nov 6 00:29:19.299799 systemd[1]: cri-containerd-def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7.scope: Deactivated successfully. 
Nov 6 00:29:19.324331 containerd[1886]: time="2025-11-06T00:29:19.324257246Z" level=info msg="received exit event container_id:\"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\" id:\"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\" pid:4167 exited_at:{seconds:1762388959 nanos:305363157}" Nov 6 00:29:19.338604 containerd[1886]: time="2025-11-06T00:29:19.338510694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\" id:\"def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7\" pid:4167 exited_at:{seconds:1762388959 nanos:305363157}" Nov 6 00:29:19.366163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-def54e1ea78460d2dcf6174710a6179a1e445e61d88d4b2b7e42f8f91c3224c7-rootfs.mount: Deactivated successfully. Nov 6 00:29:19.807881 kubelet[3535]: I1106 00:29:19.807409 3535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:29:19.809852 containerd[1886]: time="2025-11-06T00:29:19.809816297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:29:19.834975 kubelet[3535]: I1106 00:29:19.834539 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55b4f4fcf4-pv8tv" podStartSLOduration=3.337265959 podStartE2EDuration="5.833398061s" podCreationTimestamp="2025-11-06 00:29:14 +0000 UTC" firstStartedPulling="2025-11-06 00:29:15.315159208 +0000 UTC m=+23.845743357" lastFinishedPulling="2025-11-06 00:29:17.811291307 +0000 UTC m=+26.341875459" observedRunningTime="2025-11-06 00:29:18.805428354 +0000 UTC m=+27.336012529" watchObservedRunningTime="2025-11-06 00:29:19.833398061 +0000 UTC m=+28.363982232" Nov 6 00:29:20.682789 kubelet[3535]: E1106 00:29:20.682722 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:22.684699 kubelet[3535]: E1106 00:29:22.684603 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:23.967729 containerd[1886]: time="2025-11-06T00:29:23.967688719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:23.968796 containerd[1886]: time="2025-11-06T00:29:23.968756906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:29:23.969983 containerd[1886]: time="2025-11-06T00:29:23.969931855Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:23.972673 containerd[1886]: time="2025-11-06T00:29:23.972037169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:23.972673 containerd[1886]: time="2025-11-06T00:29:23.972540208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.162684211s" Nov 6 00:29:23.972673 containerd[1886]: time="2025-11-06T00:29:23.972565968Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:29:23.978949 containerd[1886]: time="2025-11-06T00:29:23.978910129Z" level=info msg="CreateContainer within sandbox \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:29:24.030807 containerd[1886]: time="2025-11-06T00:29:24.030764658Z" level=info msg="Container e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:24.079869 containerd[1886]: time="2025-11-06T00:29:24.079825681Z" level=info msg="CreateContainer within sandbox \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\"" Nov 6 00:29:24.080753 containerd[1886]: time="2025-11-06T00:29:24.080573457Z" level=info msg="StartContainer for \"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\"" Nov 6 00:29:24.083241 containerd[1886]: time="2025-11-06T00:29:24.083202784Z" level=info msg="connecting to shim e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099" address="unix:///run/containerd/s/391d4d6d6ab6cd785da08bd232316a007f445b2e0fb54049d59404ec492e555c" protocol=ttrpc version=3 Nov 6 00:29:24.109797 systemd[1]: Started cri-containerd-e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099.scope - libcontainer container e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099. 
Nov 6 00:29:24.212669 containerd[1886]: time="2025-11-06T00:29:24.212611933Z" level=info msg="StartContainer for \"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\" returns successfully" Nov 6 00:29:24.682402 kubelet[3535]: E1106 00:29:24.682216 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:25.449854 systemd[1]: cri-containerd-e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099.scope: Deactivated successfully. Nov 6 00:29:25.450098 systemd[1]: cri-containerd-e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099.scope: Consumed 612ms CPU time, 163.7M memory peak, 6.5M read from disk, 171.3M written to disk. Nov 6 00:29:25.542868 containerd[1886]: time="2025-11-06T00:29:25.542767361Z" level=info msg="received exit event container_id:\"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\" id:\"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\" pid:4225 exited_at:{seconds:1762388965 nanos:541156728}" Nov 6 00:29:25.543910 containerd[1886]: time="2025-11-06T00:29:25.543871706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\" id:\"e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099\" pid:4225 exited_at:{seconds:1762388965 nanos:541156728}" Nov 6 00:29:25.554187 kubelet[3535]: I1106 00:29:25.554095 3535 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:29:25.598027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97aff73c56d20f3f8d7d95882dbdf7e7e03a469c6aa7edd292778a91bc31099-rootfs.mount: Deactivated successfully. 
Nov 6 00:29:25.670821 systemd[1]: Created slice kubepods-besteffort-pod7bcd5d84_f469_41bd_a70e_01d6d2e8ee36.slice - libcontainer container kubepods-besteffort-pod7bcd5d84_f469_41bd_a70e_01d6d2e8ee36.slice. Nov 6 00:29:25.681531 systemd[1]: Created slice kubepods-besteffort-pod07d203b7_097a_40f5_a623_e80d0cafaabf.slice - libcontainer container kubepods-besteffort-pod07d203b7_097a_40f5_a623_e80d0cafaabf.slice. Nov 6 00:29:25.692794 systemd[1]: Created slice kubepods-besteffort-poda5bb8dc2_5212_45e9_b678_f5085dd45c44.slice - libcontainer container kubepods-besteffort-poda5bb8dc2_5212_45e9_b678_f5085dd45c44.slice. Nov 6 00:29:25.714414 systemd[1]: Created slice kubepods-besteffort-pod683394b5_a4c6_4d59_b702_aa09246c75cb.slice - libcontainer container kubepods-besteffort-pod683394b5_a4c6_4d59_b702_aa09246c75cb.slice. Nov 6 00:29:25.725149 systemd[1]: Created slice kubepods-burstable-pod4f7ba5c7_8389_4129_a47c_5c1f9c7a7f26.slice - libcontainer container kubepods-burstable-pod4f7ba5c7_8389_4129_a47c_5c1f9c7a7f26.slice. 
Nov 6 00:29:25.736324 kubelet[3535]: I1106 00:29:25.736263 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a5bb8dc2-5212-45e9-b678-f5085dd45c44-goldmane-key-pair\") pod \"goldmane-666569f655-h7sr5\" (UID: \"a5bb8dc2-5212-45e9-b678-f5085dd45c44\") " pod="calico-system/goldmane-666569f655-h7sr5" Nov 6 00:29:25.736929 kubelet[3535]: I1106 00:29:25.736428 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-backend-key-pair\") pod \"whisker-686bd885d6-r59dn\" (UID: \"66d96f39-1289-4c11-999f-b99cb505d47d\") " pod="calico-system/whisker-686bd885d6-r59dn" Nov 6 00:29:25.737249 kubelet[3535]: I1106 00:29:25.737026 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grgfs\" (UniqueName: \"kubernetes.io/projected/4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26-kube-api-access-grgfs\") pod \"coredns-674b8bbfcf-jvgwl\" (UID: \"4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26\") " pod="kube-system/coredns-674b8bbfcf-jvgwl" Nov 6 00:29:25.737249 kubelet[3535]: I1106 00:29:25.737100 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7bcd5d84-f469-41bd-a70e-01d6d2e8ee36-calico-apiserver-certs\") pod \"calico-apiserver-64c94866d7-nb8z8\" (UID: \"7bcd5d84-f469-41bd-a70e-01d6d2e8ee36\") " pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" Nov 6 00:29:25.737249 kubelet[3535]: I1106 00:29:25.737138 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-ca-bundle\") pod \"whisker-686bd885d6-r59dn\" (UID: 
\"66d96f39-1289-4c11-999f-b99cb505d47d\") " pod="calico-system/whisker-686bd885d6-r59dn" Nov 6 00:29:25.737249 kubelet[3535]: I1106 00:29:25.737191 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmr2\" (UniqueName: \"kubernetes.io/projected/fe4f3725-bf7b-43bf-9213-c207f9e2057d-kube-api-access-wvmr2\") pod \"coredns-674b8bbfcf-pcjbp\" (UID: \"fe4f3725-bf7b-43bf-9213-c207f9e2057d\") " pod="kube-system/coredns-674b8bbfcf-pcjbp" Nov 6 00:29:25.737249 kubelet[3535]: I1106 00:29:25.737216 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsms\" (UniqueName: \"kubernetes.io/projected/683394b5-a4c6-4d59-b702-aa09246c75cb-kube-api-access-5xsms\") pod \"calico-apiserver-64c94866d7-pzj7c\" (UID: \"683394b5-a4c6-4d59-b702-aa09246c75cb\") " pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" Nov 6 00:29:25.737839 kubelet[3535]: I1106 00:29:25.737490 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd652\" (UniqueName: \"kubernetes.io/projected/07d203b7-097a-40f5-a623-e80d0cafaabf-kube-api-access-xd652\") pod \"calico-kube-controllers-645cfdc79b-jfhrj\" (UID: \"07d203b7-097a-40f5-a623-e80d0cafaabf\") " pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" Nov 6 00:29:25.737839 kubelet[3535]: I1106 00:29:25.737675 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5bb8dc2-5212-45e9-b678-f5085dd45c44-config\") pod \"goldmane-666569f655-h7sr5\" (UID: \"a5bb8dc2-5212-45e9-b678-f5085dd45c44\") " pod="calico-system/goldmane-666569f655-h7sr5" Nov 6 00:29:25.737839 kubelet[3535]: I1106 00:29:25.737710 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a5bb8dc2-5212-45e9-b678-f5085dd45c44-goldmane-ca-bundle\") pod \"goldmane-666569f655-h7sr5\" (UID: \"a5bb8dc2-5212-45e9-b678-f5085dd45c44\") " pod="calico-system/goldmane-666569f655-h7sr5" Nov 6 00:29:25.738085 kubelet[3535]: I1106 00:29:25.738030 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5w9\" (UniqueName: \"kubernetes.io/projected/66d96f39-1289-4c11-999f-b99cb505d47d-kube-api-access-kp5w9\") pod \"whisker-686bd885d6-r59dn\" (UID: \"66d96f39-1289-4c11-999f-b99cb505d47d\") " pod="calico-system/whisker-686bd885d6-r59dn" Nov 6 00:29:25.738194 kubelet[3535]: I1106 00:29:25.738178 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npwg4\" (UniqueName: \"kubernetes.io/projected/a5bb8dc2-5212-45e9-b678-f5085dd45c44-kube-api-access-npwg4\") pod \"goldmane-666569f655-h7sr5\" (UID: \"a5bb8dc2-5212-45e9-b678-f5085dd45c44\") " pod="calico-system/goldmane-666569f655-h7sr5" Nov 6 00:29:25.738404 kubelet[3535]: I1106 00:29:25.738326 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4f3725-bf7b-43bf-9213-c207f9e2057d-config-volume\") pod \"coredns-674b8bbfcf-pcjbp\" (UID: \"fe4f3725-bf7b-43bf-9213-c207f9e2057d\") " pod="kube-system/coredns-674b8bbfcf-pcjbp" Nov 6 00:29:25.738404 kubelet[3535]: I1106 00:29:25.738371 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknzr\" (UniqueName: \"kubernetes.io/projected/7bcd5d84-f469-41bd-a70e-01d6d2e8ee36-kube-api-access-pknzr\") pod \"calico-apiserver-64c94866d7-nb8z8\" (UID: \"7bcd5d84-f469-41bd-a70e-01d6d2e8ee36\") " pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" Nov 6 00:29:25.740606 kubelet[3535]: I1106 00:29:25.738567 3535 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07d203b7-097a-40f5-a623-e80d0cafaabf-tigera-ca-bundle\") pod \"calico-kube-controllers-645cfdc79b-jfhrj\" (UID: \"07d203b7-097a-40f5-a623-e80d0cafaabf\") " pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" Nov 6 00:29:25.741149 kubelet[3535]: I1106 00:29:25.741131 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/683394b5-a4c6-4d59-b702-aa09246c75cb-calico-apiserver-certs\") pod \"calico-apiserver-64c94866d7-pzj7c\" (UID: \"683394b5-a4c6-4d59-b702-aa09246c75cb\") " pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" Nov 6 00:29:25.741433 kubelet[3535]: I1106 00:29:25.741333 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26-config-volume\") pod \"coredns-674b8bbfcf-jvgwl\" (UID: \"4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26\") " pod="kube-system/coredns-674b8bbfcf-jvgwl" Nov 6 00:29:25.747392 systemd[1]: Created slice kubepods-besteffort-pod66d96f39_1289_4c11_999f_b99cb505d47d.slice - libcontainer container kubepods-besteffort-pod66d96f39_1289_4c11_999f_b99cb505d47d.slice. Nov 6 00:29:25.762027 systemd[1]: Created slice kubepods-burstable-podfe4f3725_bf7b_43bf_9213_c207f9e2057d.slice - libcontainer container kubepods-burstable-podfe4f3725_bf7b_43bf_9213_c207f9e2057d.slice. 
Nov 6 00:29:25.836627 containerd[1886]: time="2025-11-06T00:29:25.835937552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:29:25.979280 containerd[1886]: time="2025-11-06T00:29:25.979233583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-nb8z8,Uid:7bcd5d84-f469-41bd-a70e-01d6d2e8ee36,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:29:25.988490 containerd[1886]: time="2025-11-06T00:29:25.988273518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645cfdc79b-jfhrj,Uid:07d203b7-097a-40f5-a623-e80d0cafaabf,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:26.011386 containerd[1886]: time="2025-11-06T00:29:26.011341879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7sr5,Uid:a5bb8dc2-5212-45e9-b678-f5085dd45c44,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:26.030187 containerd[1886]: time="2025-11-06T00:29:26.030146982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-pzj7c,Uid:683394b5-a4c6-4d59-b702-aa09246c75cb,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:29:26.039054 containerd[1886]: time="2025-11-06T00:29:26.037571041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvgwl,Uid:4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26,Namespace:kube-system,Attempt:0,}" Nov 6 00:29:26.060167 containerd[1886]: time="2025-11-06T00:29:26.058868915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-686bd885d6-r59dn,Uid:66d96f39-1289-4c11-999f-b99cb505d47d,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:26.067525 containerd[1886]: time="2025-11-06T00:29:26.067480068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcjbp,Uid:fe4f3725-bf7b-43bf-9213-c207f9e2057d,Namespace:kube-system,Attempt:0,}" Nov 6 00:29:26.715848 systemd[1]: Created slice kubepods-besteffort-podcdd556f5_82eb_470d_88d2_246c63940429.slice - libcontainer 
container kubepods-besteffort-podcdd556f5_82eb_470d_88d2_246c63940429.slice. Nov 6 00:29:26.725105 containerd[1886]: time="2025-11-06T00:29:26.725067853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw7sh,Uid:cdd556f5-82eb-470d-88d2-246c63940429,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:28.623294 containerd[1886]: time="2025-11-06T00:29:28.621949723Z" level=error msg="Failed to destroy network for sandbox \"62770f8ba2d0dc7cb1933d95defd94709a110bafad445e43f6842005318721c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.626938 systemd[1]: run-netns-cni\x2d2fcc58d2\x2d9f4a\x2d21d3\x2d0df6\x2d982d8b04b553.mount: Deactivated successfully. Nov 6 00:29:28.628534 containerd[1886]: time="2025-11-06T00:29:28.627576090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-686bd885d6-r59dn,Uid:66d96f39-1289-4c11-999f-b99cb505d47d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62770f8ba2d0dc7cb1933d95defd94709a110bafad445e43f6842005318721c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.629726 kubelet[3535]: E1106 00:29:28.627992 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62770f8ba2d0dc7cb1933d95defd94709a110bafad445e43f6842005318721c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.629726 kubelet[3535]: E1106 00:29:28.628087 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"62770f8ba2d0dc7cb1933d95defd94709a110bafad445e43f6842005318721c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-686bd885d6-r59dn" Nov 6 00:29:28.629726 kubelet[3535]: E1106 00:29:28.628116 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62770f8ba2d0dc7cb1933d95defd94709a110bafad445e43f6842005318721c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-686bd885d6-r59dn" Nov 6 00:29:28.632121 kubelet[3535]: E1106 00:29:28.628188 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-686bd885d6-r59dn_calico-system(66d96f39-1289-4c11-999f-b99cb505d47d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-686bd885d6-r59dn_calico-system(66d96f39-1289-4c11-999f-b99cb505d47d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62770f8ba2d0dc7cb1933d95defd94709a110bafad445e43f6842005318721c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-686bd885d6-r59dn" podUID="66d96f39-1289-4c11-999f-b99cb505d47d" Nov 6 00:29:28.714308 containerd[1886]: time="2025-11-06T00:29:28.713301398Z" level=error msg="Failed to destroy network for sandbox \"cb46f2b5a82dcba34b9594d28425dea995537f09fb8df26ccc0a491a4303572a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 6 00:29:28.718685 systemd[1]: run-netns-cni\x2d8b7fc6c6\x2d5c76\x2da65d\x2db4c2\x2d38a9707ea4c9.mount: Deactivated successfully. Nov 6 00:29:28.726978 containerd[1886]: time="2025-11-06T00:29:28.726926925Z" level=error msg="Failed to destroy network for sandbox \"103a8aeda75263d5982658dffe8b63e945a9b97a2e8d75bf346ced1ebbee8b55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.729345 containerd[1886]: time="2025-11-06T00:29:28.729172955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-nb8z8,Uid:7bcd5d84-f469-41bd-a70e-01d6d2e8ee36,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb46f2b5a82dcba34b9594d28425dea995537f09fb8df26ccc0a491a4303572a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.730656 kubelet[3535]: E1106 00:29:28.729484 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb46f2b5a82dcba34b9594d28425dea995537f09fb8df26ccc0a491a4303572a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.730656 kubelet[3535]: E1106 00:29:28.729555 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb46f2b5a82dcba34b9594d28425dea995537f09fb8df26ccc0a491a4303572a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" Nov 6 00:29:28.730656 kubelet[3535]: E1106 00:29:28.729659 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb46f2b5a82dcba34b9594d28425dea995537f09fb8df26ccc0a491a4303572a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" Nov 6 00:29:28.732240 kubelet[3535]: E1106 00:29:28.729750 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c94866d7-nb8z8_calico-apiserver(7bcd5d84-f469-41bd-a70e-01d6d2e8ee36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c94866d7-nb8z8_calico-apiserver(7bcd5d84-f469-41bd-a70e-01d6d2e8ee36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb46f2b5a82dcba34b9594d28425dea995537f09fb8df26ccc0a491a4303572a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:29:28.734516 systemd[1]: run-netns-cni\x2dc40d9e1f\x2d1a29\x2de3dc\x2d0b09\x2dedefb20b4eb0.mount: Deactivated successfully. 
Nov 6 00:29:28.744677 containerd[1886]: time="2025-11-06T00:29:28.744622324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645cfdc79b-jfhrj,Uid:07d203b7-097a-40f5-a623-e80d0cafaabf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"103a8aeda75263d5982658dffe8b63e945a9b97a2e8d75bf346ced1ebbee8b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.745187 kubelet[3535]: E1106 00:29:28.745139 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"103a8aeda75263d5982658dffe8b63e945a9b97a2e8d75bf346ced1ebbee8b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.746440 kubelet[3535]: E1106 00:29:28.745206 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"103a8aeda75263d5982658dffe8b63e945a9b97a2e8d75bf346ced1ebbee8b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" Nov 6 00:29:28.746547 kubelet[3535]: E1106 00:29:28.746456 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"103a8aeda75263d5982658dffe8b63e945a9b97a2e8d75bf346ced1ebbee8b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" Nov 6 00:29:28.747089 kubelet[3535]: E1106 00:29:28.746528 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-645cfdc79b-jfhrj_calico-system(07d203b7-097a-40f5-a623-e80d0cafaabf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-645cfdc79b-jfhrj_calico-system(07d203b7-097a-40f5-a623-e80d0cafaabf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"103a8aeda75263d5982658dffe8b63e945a9b97a2e8d75bf346ced1ebbee8b55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:29:28.753038 containerd[1886]: time="2025-11-06T00:29:28.752993418Z" level=error msg="Failed to destroy network for sandbox \"d0430746fc98bd582a5b8d418e2da7b056596aebd4b9acbce97c3bdc9a49a9ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.758136 systemd[1]: run-netns-cni\x2d927197a6\x2d81fb\x2db19c\x2d30fa\x2df81135ffaecf.mount: Deactivated successfully. 
Nov 6 00:29:28.764704 containerd[1886]: time="2025-11-06T00:29:28.764652396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw7sh,Uid:cdd556f5-82eb-470d-88d2-246c63940429,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0430746fc98bd582a5b8d418e2da7b056596aebd4b9acbce97c3bdc9a49a9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.765077 kubelet[3535]: E1106 00:29:28.765028 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0430746fc98bd582a5b8d418e2da7b056596aebd4b9acbce97c3bdc9a49a9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.765077 kubelet[3535]: E1106 00:29:28.765095 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0430746fc98bd582a5b8d418e2da7b056596aebd4b9acbce97c3bdc9a49a9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tw7sh" Nov 6 00:29:28.765308 kubelet[3535]: E1106 00:29:28.765124 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0430746fc98bd582a5b8d418e2da7b056596aebd4b9acbce97c3bdc9a49a9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tw7sh" Nov 6 
00:29:28.765308 kubelet[3535]: E1106 00:29:28.765184 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0430746fc98bd582a5b8d418e2da7b056596aebd4b9acbce97c3bdc9a49a9ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:28.770246 containerd[1886]: time="2025-11-06T00:29:28.770194423Z" level=error msg="Failed to destroy network for sandbox \"ad46ddd4f055f25c5ce16315ae2d5f9aad672989df09ba2d787ff5fa7f6a0923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.777690 systemd[1]: run-netns-cni\x2d683801a1\x2dd5e6\x2d9081\x2dc3d9\x2daca225842f84.mount: Deactivated successfully. 
Nov 6 00:29:28.781201 containerd[1886]: time="2025-11-06T00:29:28.780260522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-pzj7c,Uid:683394b5-a4c6-4d59-b702-aa09246c75cb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad46ddd4f055f25c5ce16315ae2d5f9aad672989df09ba2d787ff5fa7f6a0923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.781201 containerd[1886]: time="2025-11-06T00:29:28.781057844Z" level=error msg="Failed to destroy network for sandbox \"48434a964be0843e038d3443079655a03f45262476312c6da1bdfa8cc9576242\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.781410 kubelet[3535]: E1106 00:29:28.780525 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad46ddd4f055f25c5ce16315ae2d5f9aad672989df09ba2d787ff5fa7f6a0923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.781966 kubelet[3535]: E1106 00:29:28.781532 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad46ddd4f055f25c5ce16315ae2d5f9aad672989df09ba2d787ff5fa7f6a0923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" Nov 6 00:29:28.781966 kubelet[3535]: E1106 00:29:28.781593 3535 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad46ddd4f055f25c5ce16315ae2d5f9aad672989df09ba2d787ff5fa7f6a0923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" Nov 6 00:29:28.785260 kubelet[3535]: E1106 00:29:28.781674 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c94866d7-pzj7c_calico-apiserver(683394b5-a4c6-4d59-b702-aa09246c75cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c94866d7-pzj7c_calico-apiserver(683394b5-a4c6-4d59-b702-aa09246c75cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad46ddd4f055f25c5ce16315ae2d5f9aad672989df09ba2d787ff5fa7f6a0923\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:29:28.786866 containerd[1886]: time="2025-11-06T00:29:28.786822435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvgwl,Uid:4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48434a964be0843e038d3443079655a03f45262476312c6da1bdfa8cc9576242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.788099 kubelet[3535]: E1106 00:29:28.787273 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"48434a964be0843e038d3443079655a03f45262476312c6da1bdfa8cc9576242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.788099 kubelet[3535]: E1106 00:29:28.787329 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48434a964be0843e038d3443079655a03f45262476312c6da1bdfa8cc9576242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvgwl" Nov 6 00:29:28.788099 kubelet[3535]: E1106 00:29:28.787362 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48434a964be0843e038d3443079655a03f45262476312c6da1bdfa8cc9576242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jvgwl" Nov 6 00:29:28.788300 kubelet[3535]: E1106 00:29:28.787424 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jvgwl_kube-system(4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jvgwl_kube-system(4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48434a964be0843e038d3443079655a03f45262476312c6da1bdfa8cc9576242\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-jvgwl" podUID="4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26" Nov 6 00:29:28.793194 containerd[1886]: time="2025-11-06T00:29:28.793154899Z" level=error msg="Failed to destroy network for sandbox \"749131bd4f6ce74b463fe3e4c9f1301766365b778c69c429fd48e66bbb38b4f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.794591 containerd[1886]: time="2025-11-06T00:29:28.794542512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7sr5,Uid:a5bb8dc2-5212-45e9-b678-f5085dd45c44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"749131bd4f6ce74b463fe3e4c9f1301766365b778c69c429fd48e66bbb38b4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.795127 kubelet[3535]: E1106 00:29:28.794910 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749131bd4f6ce74b463fe3e4c9f1301766365b778c69c429fd48e66bbb38b4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.795127 kubelet[3535]: E1106 00:29:28.794970 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749131bd4f6ce74b463fe3e4c9f1301766365b778c69c429fd48e66bbb38b4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h7sr5" Nov 6 
00:29:28.795127 kubelet[3535]: E1106 00:29:28.795002 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"749131bd4f6ce74b463fe3e4c9f1301766365b778c69c429fd48e66bbb38b4f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h7sr5" Nov 6 00:29:28.795292 kubelet[3535]: E1106 00:29:28.795070 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-h7sr5_calico-system(a5bb8dc2-5212-45e9-b678-f5085dd45c44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-h7sr5_calico-system(a5bb8dc2-5212-45e9-b678-f5085dd45c44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"749131bd4f6ce74b463fe3e4c9f1301766365b778c69c429fd48e66bbb38b4f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:29:28.795954 containerd[1886]: time="2025-11-06T00:29:28.795925255Z" level=error msg="Failed to destroy network for sandbox \"167ea87a557242e3c08e2bf9761f7272e5f255fb4733832e49a69010125b560b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.797629 containerd[1886]: time="2025-11-06T00:29:28.797073933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcjbp,Uid:fe4f3725-bf7b-43bf-9213-c207f9e2057d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"167ea87a557242e3c08e2bf9761f7272e5f255fb4733832e49a69010125b560b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.797746 kubelet[3535]: E1106 00:29:28.797283 3535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167ea87a557242e3c08e2bf9761f7272e5f255fb4733832e49a69010125b560b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:29:28.797746 kubelet[3535]: E1106 00:29:28.797329 3535 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167ea87a557242e3c08e2bf9761f7272e5f255fb4733832e49a69010125b560b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pcjbp" Nov 6 00:29:28.797746 kubelet[3535]: E1106 00:29:28.797355 3535 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"167ea87a557242e3c08e2bf9761f7272e5f255fb4733832e49a69010125b560b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pcjbp" Nov 6 00:29:28.797881 kubelet[3535]: E1106 00:29:28.797418 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pcjbp_kube-system(fe4f3725-bf7b-43bf-9213-c207f9e2057d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-pcjbp_kube-system(fe4f3725-bf7b-43bf-9213-c207f9e2057d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"167ea87a557242e3c08e2bf9761f7272e5f255fb4733832e49a69010125b560b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pcjbp" podUID="fe4f3725-bf7b-43bf-9213-c207f9e2057d" Nov 6 00:29:29.627941 systemd[1]: run-netns-cni\x2d4793bba1\x2dc70b\x2dd934\x2d988d\x2d9b1a1f6d18cb.mount: Deactivated successfully. Nov 6 00:29:29.628442 systemd[1]: run-netns-cni\x2d1caa456d\x2df73e\x2d832a\x2dea72\x2deb296bc2933c.mount: Deactivated successfully. Nov 6 00:29:29.628528 systemd[1]: run-netns-cni\x2dacd661a2\x2dc2ca\x2d4531\x2d34ae\x2dfc1582717221.mount: Deactivated successfully. Nov 6 00:29:33.674473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290369641.mount: Deactivated successfully. 
Nov 6 00:29:33.805479 containerd[1886]: time="2025-11-06T00:29:33.803103527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:29:33.806022 containerd[1886]: time="2025-11-06T00:29:33.794994420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:33.867299 containerd[1886]: time="2025-11-06T00:29:33.867225431Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:33.882765 containerd[1886]: time="2025-11-06T00:29:33.882672339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:29:33.885636 containerd[1886]: time="2025-11-06T00:29:33.885563083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.047346926s" Nov 6 00:29:33.885636 containerd[1886]: time="2025-11-06T00:29:33.885629499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:29:34.186540 containerd[1886]: time="2025-11-06T00:29:34.186482369Z" level=info msg="CreateContainer within sandbox \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:29:34.320017 containerd[1886]: time="2025-11-06T00:29:34.319804150Z" level=info msg="Container 
2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:34.320455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543226320.mount: Deactivated successfully. Nov 6 00:29:34.375638 containerd[1886]: time="2025-11-06T00:29:34.375561591Z" level=info msg="CreateContainer within sandbox \"ffe068992621f601e2b875de6cdf50a4f549f828503eb18adc5832af488e0e27\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\"" Nov 6 00:29:34.376370 containerd[1886]: time="2025-11-06T00:29:34.376351209Z" level=info msg="StartContainer for \"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\"" Nov 6 00:29:34.382958 containerd[1886]: time="2025-11-06T00:29:34.382855663Z" level=info msg="connecting to shim 2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a" address="unix:///run/containerd/s/391d4d6d6ab6cd785da08bd232316a007f445b2e0fb54049d59404ec492e555c" protocol=ttrpc version=3 Nov 6 00:29:34.581851 systemd[1]: Started cri-containerd-2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a.scope - libcontainer container 2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a. Nov 6 00:29:34.663726 containerd[1886]: time="2025-11-06T00:29:34.663682675Z" level=info msg="StartContainer for \"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" returns successfully" Nov 6 00:29:35.004918 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:29:35.007728 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 6 00:29:35.141431 containerd[1886]: time="2025-11-06T00:29:35.141208614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" id:\"915894e92c4ae2252f04c7d801ba4aa0d76005144560d41b8ba6b547b07f237f\" pid:4547 exit_status:1 exited_at:{seconds:1762388975 nanos:140843043}" Nov 6 00:29:36.005884 containerd[1886]: time="2025-11-06T00:29:36.005832954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" id:\"7da9b0a0477b9f46c89b53b02c19d92cde8cbb98a9fe069cd8995ce1fb923556\" pid:4589 exit_status:1 exited_at:{seconds:1762388976 nanos:5456559}" Nov 6 00:29:37.009809 containerd[1886]: time="2025-11-06T00:29:37.009759458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" id:\"bed77245c94b550014c9a7328772df6868afd8f50f8f97771eb5d97bc82e60d3\" pid:4705 exit_status:1 exited_at:{seconds:1762388977 nanos:9050714}" Nov 6 00:29:37.214619 kubelet[3535]: I1106 00:29:37.214315 3535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:29:37.580810 kubelet[3535]: I1106 00:29:37.578714 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-77rxg" podStartSLOduration=5.321270816 podStartE2EDuration="23.577597422s" podCreationTimestamp="2025-11-06 00:29:14 +0000 UTC" firstStartedPulling="2025-11-06 00:29:15.630325081 +0000 UTC m=+24.160909244" lastFinishedPulling="2025-11-06 00:29:33.886651693 +0000 UTC m=+42.417235850" observedRunningTime="2025-11-06 00:29:34.95765561 +0000 UTC m=+43.488239780" watchObservedRunningTime="2025-11-06 00:29:37.577597422 +0000 UTC m=+46.108181596" Nov 6 00:29:37.736697 kubelet[3535]: I1106 00:29:37.736658 3535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-backend-key-pair\") pod \"66d96f39-1289-4c11-999f-b99cb505d47d\" (UID: \"66d96f39-1289-4c11-999f-b99cb505d47d\") " Nov 6 00:29:37.736840 kubelet[3535]: I1106 00:29:37.736708 3535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-ca-bundle\") pod \"66d96f39-1289-4c11-999f-b99cb505d47d\" (UID: \"66d96f39-1289-4c11-999f-b99cb505d47d\") " Nov 6 00:29:37.736840 kubelet[3535]: I1106 00:29:37.736754 3535 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp5w9\" (UniqueName: \"kubernetes.io/projected/66d96f39-1289-4c11-999f-b99cb505d47d-kube-api-access-kp5w9\") pod \"66d96f39-1289-4c11-999f-b99cb505d47d\" (UID: \"66d96f39-1289-4c11-999f-b99cb505d47d\") " Nov 6 00:29:37.758233 systemd[1]: var-lib-kubelet-pods-66d96f39\x2d1289\x2d4c11\x2d999f\x2db99cb505d47d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkp5w9.mount: Deactivated successfully. Nov 6 00:29:37.759840 kubelet[3535]: I1106 00:29:37.759281 3535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d96f39-1289-4c11-999f-b99cb505d47d-kube-api-access-kp5w9" (OuterVolumeSpecName: "kube-api-access-kp5w9") pod "66d96f39-1289-4c11-999f-b99cb505d47d" (UID: "66d96f39-1289-4c11-999f-b99cb505d47d"). InnerVolumeSpecName "kube-api-access-kp5w9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:29:37.759840 kubelet[3535]: I1106 00:29:37.759455 3535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "66d96f39-1289-4c11-999f-b99cb505d47d" (UID: "66d96f39-1289-4c11-999f-b99cb505d47d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:29:37.764154 kubelet[3535]: I1106 00:29:37.764118 3535 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "66d96f39-1289-4c11-999f-b99cb505d47d" (UID: "66d96f39-1289-4c11-999f-b99cb505d47d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:29:37.765195 systemd[1]: var-lib-kubelet-pods-66d96f39\x2d1289\x2d4c11\x2d999f\x2db99cb505d47d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 00:29:37.838531 kubelet[3535]: I1106 00:29:37.837721 3535 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-ca-bundle\") on node \"ip-172-31-28-191\" DevicePath \"\"" Nov 6 00:29:37.838789 kubelet[3535]: I1106 00:29:37.838733 3535 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kp5w9\" (UniqueName: \"kubernetes.io/projected/66d96f39-1289-4c11-999f-b99cb505d47d-kube-api-access-kp5w9\") on node \"ip-172-31-28-191\" DevicePath \"\"" Nov 6 00:29:37.838789 kubelet[3535]: I1106 00:29:37.838763 3535 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66d96f39-1289-4c11-999f-b99cb505d47d-whisker-backend-key-pair\") on node \"ip-172-31-28-191\" DevicePath \"\"" Nov 6 00:29:37.967308 systemd[1]: Removed slice kubepods-besteffort-pod66d96f39_1289_4c11_999f_b99cb505d47d.slice - libcontainer container kubepods-besteffort-pod66d96f39_1289_4c11_999f_b99cb505d47d.slice. 
Nov 6 00:29:38.242547 systemd[1]: Created slice kubepods-besteffort-pod06272151_9588_4a91_b3be_275f9fb7fb76.slice - libcontainer container kubepods-besteffort-pod06272151_9588_4a91_b3be_275f9fb7fb76.slice. Nov 6 00:29:38.254307 kubelet[3535]: I1106 00:29:38.254259 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06272151-9588-4a91-b3be-275f9fb7fb76-whisker-backend-key-pair\") pod \"whisker-8658cddcb4-t8jhs\" (UID: \"06272151-9588-4a91-b3be-275f9fb7fb76\") " pod="calico-system/whisker-8658cddcb4-t8jhs" Nov 6 00:29:38.254809 kubelet[3535]: I1106 00:29:38.254690 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06272151-9588-4a91-b3be-275f9fb7fb76-whisker-ca-bundle\") pod \"whisker-8658cddcb4-t8jhs\" (UID: \"06272151-9588-4a91-b3be-275f9fb7fb76\") " pod="calico-system/whisker-8658cddcb4-t8jhs" Nov 6 00:29:38.254809 kubelet[3535]: I1106 00:29:38.254745 3535 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsl8n\" (UniqueName: \"kubernetes.io/projected/06272151-9588-4a91-b3be-275f9fb7fb76-kube-api-access-vsl8n\") pod \"whisker-8658cddcb4-t8jhs\" (UID: \"06272151-9588-4a91-b3be-275f9fb7fb76\") " pod="calico-system/whisker-8658cddcb4-t8jhs" Nov 6 00:29:38.548130 containerd[1886]: time="2025-11-06T00:29:38.547838022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8658cddcb4-t8jhs,Uid:06272151-9588-4a91-b3be-275f9fb7fb76,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:38.817269 systemd-networkd[1803]: vxlan.calico: Link UP Nov 6 00:29:38.817452 systemd-networkd[1803]: vxlan.calico: Gained carrier Nov 6 00:29:38.878724 (udev-worker)[4813]: Network interface NamePolicy= disabled on kernel command line. 
Nov 6 00:29:38.879670 (udev-worker)[4825]: Network interface NamePolicy= disabled on kernel command line. Nov 6 00:29:38.880463 (udev-worker)[4826]: Network interface NamePolicy= disabled on kernel command line. Nov 6 00:29:39.683058 containerd[1886]: time="2025-11-06T00:29:39.682983205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw7sh,Uid:cdd556f5-82eb-470d-88d2-246c63940429,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:39.692858 kubelet[3535]: I1106 00:29:39.692622 3535 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d96f39-1289-4c11-999f-b99cb505d47d" path="/var/lib/kubelet/pods/66d96f39-1289-4c11-999f-b99cb505d47d/volumes" Nov 6 00:29:40.482836 systemd[1]: Started sshd@9-172.31.28.191:22-147.75.109.163:42928.service - OpenSSH per-connection server daemon (147.75.109.163:42928). Nov 6 00:29:40.682532 containerd[1886]: time="2025-11-06T00:29:40.682489642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7sr5,Uid:a5bb8dc2-5212-45e9-b678-f5085dd45c44,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:40.682939 containerd[1886]: time="2025-11-06T00:29:40.682489613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcjbp,Uid:fe4f3725-bf7b-43bf-9213-c207f9e2057d,Namespace:kube-system,Attempt:0,}" Nov 6 00:29:40.703336 sshd[4892]: Accepted publickey for core from 147.75.109.163 port 42928 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:29:40.706753 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:40.720659 systemd-logind[1847]: New session 10 of user core. Nov 6 00:29:40.727815 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 6 00:29:40.825194 systemd-networkd[1803]: vxlan.calico: Gained IPv6LL Nov 6 00:29:41.439044 sshd[4916]: Connection closed by 147.75.109.163 port 42928 Nov 6 00:29:41.439429 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:41.446889 systemd[1]: sshd@9-172.31.28.191:22-147.75.109.163:42928.service: Deactivated successfully. Nov 6 00:29:41.449050 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:29:41.450895 systemd-logind[1847]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:29:41.452236 systemd-logind[1847]: Removed session 10. Nov 6 00:29:41.700274 containerd[1886]: time="2025-11-06T00:29:41.699981837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645cfdc79b-jfhrj,Uid:07d203b7-097a-40f5-a623-e80d0cafaabf,Namespace:calico-system,Attempt:0,}" Nov 6 00:29:41.700726 containerd[1886]: time="2025-11-06T00:29:41.700396679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-nb8z8,Uid:7bcd5d84-f469-41bd-a70e-01d6d2e8ee36,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:29:42.682610 containerd[1886]: time="2025-11-06T00:29:42.682547180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-pzj7c,Uid:683394b5-a4c6-4d59-b702-aa09246c75cb,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:29:43.088879 systemd-networkd[1803]: cali104d7af2e2f: Link UP Nov 6 00:29:43.090940 systemd-networkd[1803]: cali104d7af2e2f: Gained carrier Nov 6 00:29:43.095342 (udev-worker)[5008]: Network interface NamePolicy= disabled on kernel command line. 
Nov 6 00:29:43.123064 containerd[1886]: 2025-11-06 00:29:39.060 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0 whisker-8658cddcb4- calico-system 06272151-9588-4a91-b3be-275f9fb7fb76 934 0 2025-11-06 00:29:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8658cddcb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-191 whisker-8658cddcb4-t8jhs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali104d7af2e2f [] [] }} ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-" Nov 6 00:29:43.123064 containerd[1886]: 2025-11-06 00:29:39.061 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.123064 containerd[1886]: 2025-11-06 00:29:42.936 [INFO][4837] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" HandleID="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Workload="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][4837] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" HandleID="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Workload="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038c2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-191", "pod":"whisker-8658cddcb4-t8jhs", "timestamp":"2025-11-06 00:29:42.936718025 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][4837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][4837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:42.939 [INFO][4837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:42.962 [INFO][4837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" host="ip-172-31-28-191" Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:43.047 [INFO][4837] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:43.054 [INFO][4837] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:43.057 [INFO][4837] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.123705 containerd[1886]: 2025-11-06 00:29:43.059 [INFO][4837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.059 [INFO][4837] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 
handle="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" host="ip-172-31-28-191" Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.062 [INFO][4837] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.069 [INFO][4837] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" host="ip-172-31-28-191" Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.075 [INFO][4837] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.65/26] block=192.168.126.64/26 handle="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" host="ip-172-31-28-191" Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.075 [INFO][4837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.65/26] handle="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" host="ip-172-31-28-191" Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.075 [INFO][4837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:29:43.124244 containerd[1886]: 2025-11-06 00:29:43.076 [INFO][4837] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.65/26] IPv6=[] ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" HandleID="k8s-pod-network.3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Workload="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.124545 containerd[1886]: 2025-11-06 00:29:43.079 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0", GenerateName:"whisker-8658cddcb4-", Namespace:"calico-system", SelfLink:"", UID:"06272151-9588-4a91-b3be-275f9fb7fb76", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8658cddcb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"whisker-8658cddcb4-t8jhs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali104d7af2e2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.124545 containerd[1886]: 2025-11-06 00:29:43.080 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.65/32] ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.124777 containerd[1886]: 2025-11-06 00:29:43.080 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali104d7af2e2f ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.124777 containerd[1886]: 2025-11-06 00:29:43.090 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.124870 containerd[1886]: 2025-11-06 00:29:43.091 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0", GenerateName:"whisker-8658cddcb4-", Namespace:"calico-system", SelfLink:"", UID:"06272151-9588-4a91-b3be-275f9fb7fb76", ResourceVersion:"934", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8658cddcb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d", Pod:"whisker-8658cddcb4-t8jhs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali104d7af2e2f", MAC:"02:d7:6a:6b:72:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.124970 containerd[1886]: 2025-11-06 00:29:43.114 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" Namespace="calico-system" Pod="whisker-8658cddcb4-t8jhs" WorkloadEndpoint="ip--172--31--28--191-k8s-whisker--8658cddcb4--t8jhs-eth0" Nov 6 00:29:43.226201 (udev-worker)[5014]: Network interface NamePolicy= disabled on kernel command line. 
Nov 6 00:29:43.230669 systemd-networkd[1803]: calie8f019e6ae3: Link UP Nov 6 00:29:43.231007 systemd-networkd[1803]: calie8f019e6ae3: Gained carrier Nov 6 00:29:43.261760 containerd[1886]: 2025-11-06 00:29:40.783 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0 goldmane-666569f655- calico-system a5bb8dc2-5212-45e9-b678-f5085dd45c44 851 0 2025-11-06 00:29:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-191 goldmane-666569f655-h7sr5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie8f019e6ae3 [] [] }} ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-" Nov 6 00:29:43.261760 containerd[1886]: 2025-11-06 00:29:40.783 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.261760 containerd[1886]: 2025-11-06 00:29:42.935 [INFO][4920] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" HandleID="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Workload="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][4920] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" 
HandleID="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Workload="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001024e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-191", "pod":"goldmane-666569f655-h7sr5", "timestamp":"2025-11-06 00:29:42.935612299 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][4920] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.076 [INFO][4920] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.077 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.096 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" host="ip-172-31-28-191" Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.152 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.173 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.177 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.262067 containerd[1886]: 2025-11-06 00:29:43.183 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 
00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.184 [INFO][4920] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" host="ip-172-31-28-191" Nov 6 00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.189 [INFO][4920] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9 Nov 6 00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.198 [INFO][4920] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" host="ip-172-31-28-191" Nov 6 00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.214 [INFO][4920] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.66/26] block=192.168.126.64/26 handle="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" host="ip-172-31-28-191" Nov 6 00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.214 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.66/26] handle="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" host="ip-172-31-28-191" Nov 6 00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.214 [INFO][4920] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:29:43.262445 containerd[1886]: 2025-11-06 00:29:43.214 [INFO][4920] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.66/26] IPv6=[] ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" HandleID="k8s-pod-network.2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Workload="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.264792 containerd[1886]: 2025-11-06 00:29:43.223 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a5bb8dc2-5212-45e9-b678-f5085dd45c44", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"goldmane-666569f655-h7sr5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie8f019e6ae3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.264792 containerd[1886]: 2025-11-06 00:29:43.224 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.66/32] ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.265431 containerd[1886]: 2025-11-06 00:29:43.224 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8f019e6ae3 ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.265431 containerd[1886]: 2025-11-06 00:29:43.229 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.265542 containerd[1886]: 2025-11-06 00:29:43.230 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a5bb8dc2-5212-45e9-b678-f5085dd45c44", ResourceVersion:"851", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9", Pod:"goldmane-666569f655-h7sr5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie8f019e6ae3", MAC:"5a:ba:ca:21:67:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.266695 containerd[1886]: 2025-11-06 00:29:43.256 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" Namespace="calico-system" Pod="goldmane-666569f655-h7sr5" WorkloadEndpoint="ip--172--31--28--191-k8s-goldmane--666569f655--h7sr5-eth0" Nov 6 00:29:43.366873 systemd-networkd[1803]: cali882487907ab: Link UP Nov 6 00:29:43.367171 systemd-networkd[1803]: cali882487907ab: Gained carrier Nov 6 00:29:43.440631 containerd[1886]: 2025-11-06 00:29:42.739 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0 calico-apiserver-64c94866d7- calico-apiserver 683394b5-a4c6-4d59-b702-aa09246c75cb 852 0 2025-11-06 00:29:09 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c94866d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-191 calico-apiserver-64c94866d7-pzj7c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali882487907ab [] [] }} ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-" Nov 6 00:29:43.440631 containerd[1886]: 2025-11-06 00:29:42.739 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.440631 containerd[1886]: 2025-11-06 00:29:42.933 [INFO][5002] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" HandleID="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Workload="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][5002] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" HandleID="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Workload="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-191", 
"pod":"calico-apiserver-64c94866d7-pzj7c", "timestamp":"2025-11-06 00:29:42.933458378 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:42.939 [INFO][5002] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.214 [INFO][5002] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.215 [INFO][5002] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.241 [INFO][5002] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" host="ip-172-31-28-191" Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.255 [INFO][5002] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.272 [INFO][5002] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.276 [INFO][5002] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.440939 containerd[1886]: 2025-11-06 00:29:43.284 [INFO][5002] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.441285 containerd[1886]: 2025-11-06 00:29:43.291 [INFO][5002] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" host="ip-172-31-28-191" Nov 6 00:29:43.441285 containerd[1886]: 
2025-11-06 00:29:43.301 [INFO][5002] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231 Nov 6 00:29:43.441285 containerd[1886]: 2025-11-06 00:29:43.313 [INFO][5002] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" host="ip-172-31-28-191" Nov 6 00:29:43.441285 containerd[1886]: 2025-11-06 00:29:43.342 [INFO][5002] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.67/26] block=192.168.126.64/26 handle="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" host="ip-172-31-28-191" Nov 6 00:29:43.441285 containerd[1886]: 2025-11-06 00:29:43.342 [INFO][5002] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.67/26] handle="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" host="ip-172-31-28-191" Nov 6 00:29:43.441285 containerd[1886]: 2025-11-06 00:29:43.342 [INFO][5002] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:29:43.441285 containerd[1886]: 2025-11-06 00:29:43.345 [INFO][5002] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.67/26] IPv6=[] ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" HandleID="k8s-pod-network.fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Workload="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.446778 containerd[1886]: 2025-11-06 00:29:43.358 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0", GenerateName:"calico-apiserver-64c94866d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"683394b5-a4c6-4d59-b702-aa09246c75cb", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c94866d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"calico-apiserver-64c94866d7-pzj7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali882487907ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.446913 containerd[1886]: 2025-11-06 00:29:43.359 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.67/32] ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.446913 containerd[1886]: 2025-11-06 00:29:43.359 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali882487907ab ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.446913 containerd[1886]: 2025-11-06 00:29:43.367 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.447045 containerd[1886]: 2025-11-06 00:29:43.372 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0", GenerateName:"calico-apiserver-64c94866d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"683394b5-a4c6-4d59-b702-aa09246c75cb", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c94866d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231", Pod:"calico-apiserver-64c94866d7-pzj7c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali882487907ab", MAC:"7e:7b:93:67:a0:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.447139 containerd[1886]: 2025-11-06 00:29:43.413 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-pzj7c" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--pzj7c-eth0" Nov 6 00:29:43.557951 systemd-networkd[1803]: califdae6b45193: Link UP Nov 6 00:29:43.560101 systemd-networkd[1803]: califdae6b45193: Gained carrier 
Nov 6 00:29:43.609523 containerd[1886]: 2025-11-06 00:29:39.730 [INFO][4868] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0 csi-node-driver- calico-system cdd556f5-82eb-470d-88d2-246c63940429 742 0 2025-11-06 00:29:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-191 csi-node-driver-tw7sh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califdae6b45193 [] [] }} ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-" Nov 6 00:29:43.609523 containerd[1886]: 2025-11-06 00:29:39.731 [INFO][4868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.609523 containerd[1886]: 2025-11-06 00:29:42.937 [INFO][4881] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" HandleID="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Workload="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:42.943 [INFO][4881] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" 
HandleID="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Workload="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c0960), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-191", "pod":"csi-node-driver-tw7sh", "timestamp":"2025-11-06 00:29:42.937518938 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:42.943 [INFO][4881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.342 [INFO][4881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.342 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.388 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" host="ip-172-31-28-191" Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.407 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.424 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.433 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.610047 containerd[1886]: 2025-11-06 00:29:43.450 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 
00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.450 [INFO][4881] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" host="ip-172-31-28-191" Nov 6 00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.456 [INFO][4881] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1 Nov 6 00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.469 [INFO][4881] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" host="ip-172-31-28-191" Nov 6 00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.488 [INFO][4881] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.68/26] block=192.168.126.64/26 handle="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" host="ip-172-31-28-191" Nov 6 00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.489 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.68/26] handle="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" host="ip-172-31-28-191" Nov 6 00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.489 [INFO][4881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:29:43.610619 containerd[1886]: 2025-11-06 00:29:43.489 [INFO][4881] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.68/26] IPv6=[] ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" HandleID="k8s-pod-network.57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Workload="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.614631 containerd[1886]: 2025-11-06 00:29:43.501 [INFO][4868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdd556f5-82eb-470d-88d2-246c63940429", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"csi-node-driver-tw7sh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califdae6b45193", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.614799 containerd[1886]: 2025-11-06 00:29:43.503 [INFO][4868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.68/32] ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.614799 containerd[1886]: 2025-11-06 00:29:43.504 [INFO][4868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califdae6b45193 ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.614799 containerd[1886]: 2025-11-06 00:29:43.564 [INFO][4868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.614994 containerd[1886]: 2025-11-06 00:29:43.567 [INFO][4868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdd556f5-82eb-470d-88d2-246c63940429", 
ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1", Pod:"csi-node-driver-tw7sh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califdae6b45193", MAC:"8a:d3:e8:34:51:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.615105 containerd[1886]: 2025-11-06 00:29:43.596 [INFO][4868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" Namespace="calico-system" Pod="csi-node-driver-tw7sh" WorkloadEndpoint="ip--172--31--28--191-k8s-csi--node--driver--tw7sh-eth0" Nov 6 00:29:43.663299 containerd[1886]: time="2025-11-06T00:29:43.662884807Z" level=info msg="connecting to shim 2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9" address="unix:///run/containerd/s/92dc9c94e07045350ab49128ed535f4770a6b7c1e369fa0740097192104f231d" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:43.702773 containerd[1886]: 
time="2025-11-06T00:29:43.702665863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvgwl,Uid:4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26,Namespace:kube-system,Attempt:0,}" Nov 6 00:29:43.718741 containerd[1886]: time="2025-11-06T00:29:43.717953188Z" level=info msg="connecting to shim 3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d" address="unix:///run/containerd/s/380a5d8e0cb23b3f3b707b29cd8f48cd21c9b0cda97fab8c0229fb71e2137476" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:43.724519 systemd-networkd[1803]: calibad62080fbd: Link UP Nov 6 00:29:43.728334 systemd-networkd[1803]: calibad62080fbd: Gained carrier Nov 6 00:29:43.781886 containerd[1886]: time="2025-11-06T00:29:43.781830979Z" level=info msg="connecting to shim fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231" address="unix:///run/containerd/s/acc23c9822a758572e70cf58f4c3e8f8ba73ff540946d41c9da179efd543d0b8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:43.843773 containerd[1886]: 2025-11-06 00:29:41.796 [INFO][4951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0 calico-kube-controllers-645cfdc79b- calico-system 07d203b7-097a-40f5-a623-e80d0cafaabf 848 0 2025-11-06 00:29:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645cfdc79b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-191 calico-kube-controllers-645cfdc79b-jfhrj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibad62080fbd [] [] }} ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" 
WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-" Nov 6 00:29:43.843773 containerd[1886]: 2025-11-06 00:29:41.796 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.843773 containerd[1886]: 2025-11-06 00:29:42.942 [INFO][4968] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" HandleID="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Workload="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:42.943 [INFO][4968] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" HandleID="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Workload="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-191", "pod":"calico-kube-controllers-645cfdc79b-jfhrj", "timestamp":"2025-11-06 00:29:42.942820656 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:42.943 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.489 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.489 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.526 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" host="ip-172-31-28-191" Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.547 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.567 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.578 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.844943 containerd[1886]: 2025-11-06 00:29:43.596 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:43.845713 containerd[1886]: 2025-11-06 00:29:43.597 [INFO][4968] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" host="ip-172-31-28-191" Nov 6 00:29:43.845713 containerd[1886]: 2025-11-06 00:29:43.604 [INFO][4968] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6 Nov 6 00:29:43.845713 containerd[1886]: 2025-11-06 00:29:43.618 [INFO][4968] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" host="ip-172-31-28-191" Nov 6 00:29:43.845713 containerd[1886]: 
2025-11-06 00:29:43.657 [INFO][4968] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.69/26] block=192.168.126.64/26 handle="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" host="ip-172-31-28-191" Nov 6 00:29:43.845713 containerd[1886]: 2025-11-06 00:29:43.657 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.69/26] handle="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" host="ip-172-31-28-191" Nov 6 00:29:43.845713 containerd[1886]: 2025-11-06 00:29:43.657 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:29:43.845713 containerd[1886]: 2025-11-06 00:29:43.658 [INFO][4968] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.69/26] IPv6=[] ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" HandleID="k8s-pod-network.518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Workload="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.846034 containerd[1886]: 2025-11-06 00:29:43.674 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0", GenerateName:"calico-kube-controllers-645cfdc79b-", Namespace:"calico-system", SelfLink:"", UID:"07d203b7-097a-40f5-a623-e80d0cafaabf", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645cfdc79b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"calico-kube-controllers-645cfdc79b-jfhrj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibad62080fbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.846138 containerd[1886]: 2025-11-06 00:29:43.679 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.69/32] ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.846138 containerd[1886]: 2025-11-06 00:29:43.682 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibad62080fbd ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.846138 containerd[1886]: 2025-11-06 00:29:43.724 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.846850 containerd[1886]: 2025-11-06 00:29:43.766 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0", GenerateName:"calico-kube-controllers-645cfdc79b-", Namespace:"calico-system", SelfLink:"", UID:"07d203b7-097a-40f5-a623-e80d0cafaabf", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645cfdc79b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6", Pod:"calico-kube-controllers-645cfdc79b-jfhrj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibad62080fbd", MAC:"a2:05:13:ac:97:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:43.846996 containerd[1886]: 2025-11-06 00:29:43.810 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" Namespace="calico-system" Pod="calico-kube-controllers-645cfdc79b-jfhrj" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--kube--controllers--645cfdc79b--jfhrj-eth0" Nov 6 00:29:43.895838 systemd[1]: Started cri-containerd-2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9.scope - libcontainer container 2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9. Nov 6 00:29:43.987952 systemd-networkd[1803]: cali85762c41cc5: Link UP Nov 6 00:29:43.994624 containerd[1886]: time="2025-11-06T00:29:43.994564956Z" level=info msg="connecting to shim 57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1" address="unix:///run/containerd/s/51df3816daa2b1a42c97d9561651c863382747210c22409d40f6bfc0aef25c33" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:44.003647 systemd-networkd[1803]: cali85762c41cc5: Gained carrier Nov 6 00:29:44.052827 containerd[1886]: 2025-11-06 00:29:41.847 [INFO][4942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0 calico-apiserver-64c94866d7- calico-apiserver 7bcd5d84-f469-41bd-a70e-01d6d2e8ee36 847 0 2025-11-06 00:29:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c94866d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-191 
calico-apiserver-64c94866d7-nb8z8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85762c41cc5 [] [] }} ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-" Nov 6 00:29:44.052827 containerd[1886]: 2025-11-06 00:29:41.847 [INFO][4942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.052827 containerd[1886]: 2025-11-06 00:29:42.943 [INFO][4974] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" HandleID="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Workload="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:42.944 [INFO][4974] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" HandleID="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Workload="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-191", "pod":"calico-apiserver-64c94866d7-nb8z8", "timestamp":"2025-11-06 00:29:42.943964725 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:42.944 [INFO][4974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.657 [INFO][4974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.659 [INFO][4974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.754 [INFO][4974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" host="ip-172-31-28-191" Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.798 [INFO][4974] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.835 [INFO][4974] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.838 [INFO][4974] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.054770 containerd[1886]: 2025-11-06 00:29:43.843 [INFO][4974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.843 [INFO][4974] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" host="ip-172-31-28-191" Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.847 [INFO][4974] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6 Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.859 [INFO][4974] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.126.64/26 handle="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" host="ip-172-31-28-191" Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.875 [INFO][4974] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.70/26] block=192.168.126.64/26 handle="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" host="ip-172-31-28-191" Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.875 [INFO][4974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.70/26] handle="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" host="ip-172-31-28-191" Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.875 [INFO][4974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:29:44.055165 containerd[1886]: 2025-11-06 00:29:43.875 [INFO][4974] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.70/26] IPv6=[] ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" HandleID="k8s-pod-network.859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Workload="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.055442 containerd[1886]: 2025-11-06 00:29:43.912 [INFO][4942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0", GenerateName:"calico-apiserver-64c94866d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bcd5d84-f469-41bd-a70e-01d6d2e8ee36", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, 
time.November, 6, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c94866d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"calico-apiserver-64c94866d7-nb8z8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85762c41cc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:44.055543 containerd[1886]: 2025-11-06 00:29:43.912 [INFO][4942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.70/32] ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.055543 containerd[1886]: 2025-11-06 00:29:43.912 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85762c41cc5 ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.055543 containerd[1886]: 2025-11-06 00:29:44.002 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.058432 containerd[1886]: 2025-11-06 00:29:44.004 [INFO][4942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0", GenerateName:"calico-apiserver-64c94866d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7bcd5d84-f469-41bd-a70e-01d6d2e8ee36", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 29, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c94866d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6", Pod:"calico-apiserver-64c94866d7-nb8z8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85762c41cc5", MAC:"ae:63:6d:7c:76:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:44.058912 containerd[1886]: 2025-11-06 00:29:44.036 [INFO][4942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" Namespace="calico-apiserver" Pod="calico-apiserver-64c94866d7-nb8z8" WorkloadEndpoint="ip--172--31--28--191-k8s-calico--apiserver--64c94866d7--nb8z8-eth0" Nov 6 00:29:44.093994 containerd[1886]: time="2025-11-06T00:29:44.091267254Z" level=info msg="connecting to shim 518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6" address="unix:///run/containerd/s/cc2a4b0be6f4f5cd1ce7ab3cf4239182304d946c7ed106e4431f521ee242dd36" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:44.091527 systemd[1]: Started cri-containerd-3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d.scope - libcontainer container 3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d. Nov 6 00:29:44.112995 systemd-networkd[1803]: cali19da6d63f65: Link UP Nov 6 00:29:44.118373 systemd-networkd[1803]: cali19da6d63f65: Gained carrier Nov 6 00:29:44.148980 systemd[1]: Started cri-containerd-fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231.scope - libcontainer container fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231. 
Nov 6 00:29:44.188303 containerd[1886]: 2025-11-06 00:29:40.790 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0 coredns-674b8bbfcf- kube-system fe4f3725-bf7b-43bf-9213-c207f9e2057d 849 0 2025-11-06 00:28:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-191 coredns-674b8bbfcf-pcjbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali19da6d63f65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-" Nov 6 00:29:44.188303 containerd[1886]: 2025-11-06 00:29:40.791 [INFO][4897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.188303 containerd[1886]: 2025-11-06 00:29:42.944 [INFO][4925] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" HandleID="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Workload="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:42.946 [INFO][4925] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" HandleID="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" 
Workload="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037a350), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-191", "pod":"coredns-674b8bbfcf-pcjbp", "timestamp":"2025-11-06 00:29:42.94487218 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:42.946 [INFO][4925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.875 [INFO][4925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.876 [INFO][4925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.899 [INFO][4925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" host="ip-172-31-28-191" Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.925 [INFO][4925] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.951 [INFO][4925] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.960 [INFO][4925] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.189878 containerd[1886]: 2025-11-06 00:29:43.969 [INFO][4925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:43.969 [INFO][4925] ipam/ipam.go 1219: 
Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" host="ip-172-31-28-191" Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:43.978 [INFO][4925] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:43.995 [INFO][4925] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" host="ip-172-31-28-191" Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:44.038 [INFO][4925] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.71/26] block=192.168.126.64/26 handle="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" host="ip-172-31-28-191" Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:44.038 [INFO][4925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.71/26] handle="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" host="ip-172-31-28-191" Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:44.038 [INFO][4925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:29:44.190243 containerd[1886]: 2025-11-06 00:29:44.038 [INFO][4925] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.71/26] IPv6=[] ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" HandleID="k8s-pod-network.b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Workload="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.190502 containerd[1886]: 2025-11-06 00:29:44.061 [INFO][4897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe4f3725-bf7b-43bf-9213-c207f9e2057d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"coredns-674b8bbfcf-pcjbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19da6d63f65", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:44.190502 containerd[1886]: 2025-11-06 00:29:44.062 [INFO][4897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.71/32] ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.190502 containerd[1886]: 2025-11-06 00:29:44.062 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19da6d63f65 ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.190502 containerd[1886]: 2025-11-06 00:29:44.121 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.190502 containerd[1886]: 2025-11-06 00:29:44.140 [INFO][4897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe4f3725-bf7b-43bf-9213-c207f9e2057d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b", Pod:"coredns-674b8bbfcf-pcjbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali19da6d63f65", MAC:"f2:9c:3f:0f:82:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:44.190502 containerd[1886]: 2025-11-06 00:29:44.179 [INFO][4897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcjbp" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--pcjbp-eth0" Nov 6 00:29:44.222893 systemd[1]: Started cri-containerd-518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6.scope - libcontainer container 518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6. Nov 6 00:29:44.227393 systemd[1]: Started cri-containerd-57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1.scope - libcontainer container 57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1. Nov 6 00:29:44.343177 systemd-networkd[1803]: cali104d7af2e2f: Gained IPv6LL Nov 6 00:29:44.348288 containerd[1886]: time="2025-11-06T00:29:44.345564308Z" level=info msg="connecting to shim 859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6" address="unix:///run/containerd/s/2226b6b40ed410f9f41b35494ef51663668c60eeb8ac10031dab9a2a6df15d54" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:44.376612 containerd[1886]: time="2025-11-06T00:29:44.374539947Z" level=info msg="connecting to shim b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b" address="unix:///run/containerd/s/49a9694f33b85379ba88743c28cb65e60f9ee2bc83936c9a5f9cf693428393c7" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:44.457181 systemd[1]: Started cri-containerd-b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b.scope - libcontainer container b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b. 
Nov 6 00:29:44.499530 containerd[1886]: time="2025-11-06T00:29:44.499054096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h7sr5,Uid:a5bb8dc2-5212-45e9-b678-f5085dd45c44,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e1a1769d52941ab1c045c8551d847caf6cb85339b9dc98c2d33aa6dd0d747a9\"" Nov 6 00:29:44.570032 systemd[1]: Started cri-containerd-859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6.scope - libcontainer container 859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6. Nov 6 00:29:44.596170 containerd[1886]: time="2025-11-06T00:29:44.596030149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:29:44.721802 systemd-networkd[1803]: cali5cc3b627c17: Link UP Nov 6 00:29:44.723946 systemd-networkd[1803]: cali5cc3b627c17: Gained carrier Nov 6 00:29:44.745730 containerd[1886]: time="2025-11-06T00:29:44.745686833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcjbp,Uid:fe4f3725-bf7b-43bf-9213-c207f9e2057d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b\"" Nov 6 00:29:44.792721 containerd[1886]: time="2025-11-06T00:29:44.792679954Z" level=info msg="CreateContainer within sandbox \"b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.138 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0 coredns-674b8bbfcf- kube-system 4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26 853 0 2025-11-06 00:28:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-191 coredns-674b8bbfcf-jvgwl 
eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cc3b627c17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.141 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.573 [INFO][5239] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" HandleID="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Workload="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.578 [INFO][5239] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" HandleID="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Workload="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-191", "pod":"coredns-674b8bbfcf-jvgwl", "timestamp":"2025-11-06 00:29:44.573079272 +0000 UTC"}, Hostname:"ip-172-31-28-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.578 [INFO][5239] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.578 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.579 [INFO][5239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-191' Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.595 [INFO][5239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.609 [INFO][5239] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.622 [INFO][5239] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.625 [INFO][5239] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.631 [INFO][5239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.631 [INFO][5239] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.635 [INFO][5239] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7 Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.648 [INFO][5239] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" 
host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.677 [INFO][5239] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.72/26] block=192.168.126.64/26 handle="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.682 [INFO][5239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.72/26] handle="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" host="ip-172-31-28-191" Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.682 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:29:44.813061 containerd[1886]: 2025-11-06 00:29:44.682 [INFO][5239] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.72/26] IPv6=[] ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" HandleID="k8s-pod-network.99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Workload="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.814528 containerd[1886]: 2025-11-06 00:29:44.715 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"", Pod:"coredns-674b8bbfcf-jvgwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cc3b627c17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:44.814528 containerd[1886]: 2025-11-06 00:29:44.715 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.72/32] ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.814528 containerd[1886]: 2025-11-06 00:29:44.716 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cc3b627c17 ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.814528 
containerd[1886]: 2025-11-06 00:29:44.722 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.814528 containerd[1886]: 2025-11-06 00:29:44.723 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-191", ContainerID:"99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7", Pod:"coredns-674b8bbfcf-jvgwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali5cc3b627c17", MAC:"96:90:09:16:4c:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:29:44.814528 containerd[1886]: 2025-11-06 00:29:44.786 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" Namespace="kube-system" Pod="coredns-674b8bbfcf-jvgwl" WorkloadEndpoint="ip--172--31--28--191-k8s-coredns--674b8bbfcf--jvgwl-eth0" Nov 6 00:29:44.824878 containerd[1886]: time="2025-11-06T00:29:44.824682482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tw7sh,Uid:cdd556f5-82eb-470d-88d2-246c63940429,Namespace:calico-system,Attempt:0,} returns sandbox id \"57f90ec67bf72245b847c6682e92e1efda49bc4245e3bbdee4c5d912f1c0bdd1\"" Nov 6 00:29:44.841424 containerd[1886]: time="2025-11-06T00:29:44.841386892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645cfdc79b-jfhrj,Uid:07d203b7-097a-40f5-a623-e80d0cafaabf,Namespace:calico-system,Attempt:0,} returns sandbox id \"518ec295ae6b5ab4910ad9d3697d2d5a2eaceb7101910b19660206750cedb0c6\"" Nov 6 00:29:44.852666 containerd[1886]: time="2025-11-06T00:29:44.852454994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8658cddcb4-t8jhs,Uid:06272151-9588-4a91-b3be-275f9fb7fb76,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a555fc996989e95005990d165ec478288995bbd06b3798180c9b61db7216f3d\"" Nov 6 00:29:44.854752 systemd-networkd[1803]: calibad62080fbd: 
Gained IPv6LL Nov 6 00:29:44.923113 containerd[1886]: time="2025-11-06T00:29:44.922470565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:44.923244 containerd[1886]: time="2025-11-06T00:29:44.922900214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-pzj7c,Uid:683394b5-a4c6-4d59-b702-aa09246c75cb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fc29d9a5871b70efd8f978dc11fc01a8743c0ddd10c513ff5b908bfb2c6b5231\"" Nov 6 00:29:44.924550 containerd[1886]: time="2025-11-06T00:29:44.923571612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:29:44.924550 containerd[1886]: time="2025-11-06T00:29:44.924132898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:29:44.934132 containerd[1886]: time="2025-11-06T00:29:44.933983434Z" level=info msg="connecting to shim 99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7" address="unix:///run/containerd/s/e17c8818fc113abd98f7f17152573ab9c27d4a0804acec5c93a8cfa91935750f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:29:44.946147 kubelet[3535]: E1106 00:29:44.946078 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:29:44.948132 kubelet[3535]: E1106 00:29:44.946161 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:29:44.951053 containerd[1886]: time="2025-11-06T00:29:44.950271349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:29:44.964270 containerd[1886]: time="2025-11-06T00:29:44.963541647Z" level=info msg="Container f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:44.965667 kubelet[3535]: E1106 00:29:44.965539 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npwg4
,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7sr5_calico-system(a5bb8dc2-5212-45e9-b678-f5085dd45c44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:44.974834 kubelet[3535]: E1106 00:29:44.974339 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:29:44.983852 systemd-networkd[1803]: cali882487907ab: Gained IPv6LL Nov 6 00:29:44.995407 containerd[1886]: time="2025-11-06T00:29:44.995356602Z" level=info msg="CreateContainer within sandbox \"b5294aa4737a561b66654bebb34d3e5a90f56b9fdf552dbc6a97737215815b2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81\"" Nov 6 00:29:45.004607 containerd[1886]: time="2025-11-06T00:29:45.003994602Z" level=info msg="StartContainer for \"f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81\"" Nov 6 00:29:45.009249 systemd[1]: Started cri-containerd-99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7.scope - libcontainer container 99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7. 
Nov 6 00:29:45.023319 containerd[1886]: time="2025-11-06T00:29:45.023270414Z" level=info msg="connecting to shim f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81" address="unix:///run/containerd/s/49a9694f33b85379ba88743c28cb65e60f9ee2bc83936c9a5f9cf693428393c7" protocol=ttrpc version=3 Nov 6 00:29:45.046933 systemd-networkd[1803]: calie8f019e6ae3: Gained IPv6LL Nov 6 00:29:45.065942 kubelet[3535]: E1106 00:29:45.065449 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:29:45.107690 systemd[1]: Started cri-containerd-f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81.scope - libcontainer container f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81. 
Nov 6 00:29:45.117517 containerd[1886]: time="2025-11-06T00:29:45.117436638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c94866d7-nb8z8,Uid:7bcd5d84-f469-41bd-a70e-01d6d2e8ee36,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"859f2e9a2bc7291d16c85976daf79413b57ef628c0ade4fb25113f72344bb2d6\"" Nov 6 00:29:45.169034 containerd[1886]: time="2025-11-06T00:29:45.168988433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jvgwl,Uid:4f7ba5c7-8389-4129-a47c-5c1f9c7a7f26,Namespace:kube-system,Attempt:0,} returns sandbox id \"99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7\"" Nov 6 00:29:45.176684 containerd[1886]: time="2025-11-06T00:29:45.176633674Z" level=info msg="CreateContainer within sandbox \"99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:29:45.186812 containerd[1886]: time="2025-11-06T00:29:45.186748664Z" level=info msg="Container ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:29:45.194819 containerd[1886]: time="2025-11-06T00:29:45.194771347Z" level=info msg="CreateContainer within sandbox \"99302d906f183baa00324aa73d7ae7472148bd70b25a5d1a964e81acd58a5ba7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637\"" Nov 6 00:29:45.195838 containerd[1886]: time="2025-11-06T00:29:45.195783891Z" level=info msg="StartContainer for \"ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637\"" Nov 6 00:29:45.198324 containerd[1886]: time="2025-11-06T00:29:45.197765528Z" level=info msg="connecting to shim ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637" address="unix:///run/containerd/s/e17c8818fc113abd98f7f17152573ab9c27d4a0804acec5c93a8cfa91935750f" protocol=ttrpc version=3 Nov 6 00:29:45.216237 containerd[1886]: 
time="2025-11-06T00:29:45.216040901Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:45.217270 containerd[1886]: time="2025-11-06T00:29:45.217225903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:29:45.217958 containerd[1886]: time="2025-11-06T00:29:45.217925832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:29:45.218637 kubelet[3535]: E1106 00:29:45.218531 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:29:45.218784 kubelet[3535]: E1106 00:29:45.218752 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:29:45.220092 kubelet[3535]: E1106 00:29:45.219769 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:45.224724 containerd[1886]: time="2025-11-06T00:29:45.224657860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:29:45.230160 systemd[1]: Started cri-containerd-ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637.scope - libcontainer container ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637. Nov 6 00:29:45.239665 systemd-networkd[1803]: califdae6b45193: Gained IPv6LL Nov 6 00:29:45.258518 containerd[1886]: time="2025-11-06T00:29:45.258448789Z" level=info msg="StartContainer for \"f7700bf27cc1a214cfa8b55c19d29d2ac0c5dd251032eb3025ccd39b9457ff81\" returns successfully" Nov 6 00:29:45.291747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743940450.mount: Deactivated successfully. Nov 6 00:29:45.312443 containerd[1886]: time="2025-11-06T00:29:45.312350416Z" level=info msg="StartContainer for \"ffe96d0870d518fa492b4cc00ba8d668bf0df32dcec013a533c1e72479e68637\" returns successfully" Nov 6 00:29:45.478380 containerd[1886]: time="2025-11-06T00:29:45.478338148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:45.479459 containerd[1886]: time="2025-11-06T00:29:45.479412467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:29:45.479693 containerd[1886]: time="2025-11-06T00:29:45.479421431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:29:45.479781 kubelet[3535]: E1106 00:29:45.479668 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:29:45.479781 kubelet[3535]: E1106 00:29:45.479765 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:29:45.480136 kubelet[3535]: E1106 00:29:45.480013 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xd652,ReadOnly:true,MountPath:/var/run/sec
rets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645cfdc79b-jfhrj_calico-system(07d203b7-097a-40f5-a623-e80d0cafaabf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:45.480468 containerd[1886]: time="2025-11-06T00:29:45.480166470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:29:45.481905 kubelet[3535]: E1106 00:29:45.481779 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:29:45.755088 containerd[1886]: time="2025-11-06T00:29:45.754902080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:45.755959 containerd[1886]: time="2025-11-06T00:29:45.755893202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:29:45.756164 containerd[1886]: time="2025-11-06T00:29:45.755973964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:29:45.756247 kubelet[3535]: E1106 00:29:45.756152 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:29:45.756247 kubelet[3535]: E1106 00:29:45.756196 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:29:45.756819 containerd[1886]: 
time="2025-11-06T00:29:45.756492777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:29:45.757100 kubelet[3535]: E1106 00:29:45.757011 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:399009cb1ccf418794e77c19f7d21413,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:45.942985 systemd-networkd[1803]: cali19da6d63f65: Gained IPv6LL Nov 6 00:29:46.009940 systemd-networkd[1803]: cali5cc3b627c17: Gained IPv6LL Nov 6 00:29:46.010818 systemd-networkd[1803]: cali85762c41cc5: Gained IPv6LL Nov 6 00:29:46.037615 containerd[1886]: time="2025-11-06T00:29:46.037494688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:46.038913 containerd[1886]: time="2025-11-06T00:29:46.038825703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:29:46.039277 containerd[1886]: time="2025-11-06T00:29:46.038875767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:29:46.040116 kubelet[3535]: E1106 00:29:46.040044 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:46.040813 kubelet[3535]: E1106 00:29:46.040132 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:46.040813 kubelet[3535]: E1106 00:29:46.040424 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xsms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-pzj7c_calico-apiserver(683394b5-a4c6-4d59-b702-aa09246c75cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:46.042259 kubelet[3535]: E1106 00:29:46.041939 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:29:46.042617 containerd[1886]: time="2025-11-06T00:29:46.042242529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:29:46.078509 kubelet[3535]: E1106 00:29:46.077202 3535 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:29:46.078509 kubelet[3535]: E1106 00:29:46.077733 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:29:46.079359 kubelet[3535]: E1106 00:29:46.077811 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:29:46.120464 kubelet[3535]: I1106 00:29:46.119539 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-pcjbp" podStartSLOduration=49.119520298 podStartE2EDuration="49.119520298s" podCreationTimestamp="2025-11-06 00:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:46.088477335 +0000 UTC m=+54.619061505" watchObservedRunningTime="2025-11-06 00:29:46.119520298 +0000 UTC m=+54.650104469" Nov 6 00:29:46.232129 kubelet[3535]: I1106 00:29:46.232064 3535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jvgwl" podStartSLOduration=50.232042643 podStartE2EDuration="50.232042643s" podCreationTimestamp="2025-11-06 00:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:29:46.231812715 +0000 UTC m=+54.762396888" watchObservedRunningTime="2025-11-06 00:29:46.232042643 +0000 UTC m=+54.762626825" Nov 6 00:29:46.329096 containerd[1886]: time="2025-11-06T00:29:46.328235671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:46.331660 containerd[1886]: time="2025-11-06T00:29:46.331617631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:29:46.332082 containerd[1886]: time="2025-11-06T00:29:46.331624392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:29:46.332173 kubelet[3535]: E1106 00:29:46.331903 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:46.332173 kubelet[3535]: E1106 00:29:46.331960 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:46.332430 kubelet[3535]: E1106 00:29:46.332232 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pknzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-nb8z8_calico-apiserver(7bcd5d84-f469-41bd-a70e-01d6d2e8ee36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:46.333038 containerd[1886]: time="2025-11-06T00:29:46.332847921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:29:46.334185 kubelet[3535]: E1106 00:29:46.334071 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:29:46.475403 systemd[1]: Started 
sshd@10-172.31.28.191:22-147.75.109.163:42934.service - OpenSSH per-connection server daemon (147.75.109.163:42934). Nov 6 00:29:46.629811 containerd[1886]: time="2025-11-06T00:29:46.629393500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:46.631779 containerd[1886]: time="2025-11-06T00:29:46.631608939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:29:46.631779 containerd[1886]: time="2025-11-06T00:29:46.631611196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:29:46.632183 kubelet[3535]: E1106 00:29:46.632112 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:29:46.632183 kubelet[3535]: E1106 00:29:46.632165 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:29:46.633080 containerd[1886]: time="2025-11-06T00:29:46.633054996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 
00:29:46.633195 kubelet[3535]: E1106 00:29:46.632819 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:46.634371 kubelet[3535]: E1106 00:29:46.634331 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:46.681103 sshd[5548]: Accepted publickey for core from 147.75.109.163 port 42934 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:29:46.684684 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:46.691566 systemd-logind[1847]: New session 11 of user core. Nov 6 00:29:46.696811 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 6 00:29:46.892740 containerd[1886]: time="2025-11-06T00:29:46.892614433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:46.894241 containerd[1886]: time="2025-11-06T00:29:46.894161298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:29:46.894241 containerd[1886]: time="2025-11-06T00:29:46.894205034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:29:46.895213 kubelet[3535]: E1106 00:29:46.894778 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:29:46.895213 kubelet[3535]: E1106 00:29:46.894830 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:29:46.895213 kubelet[3535]: E1106 00:29:46.894941 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:46.896861 kubelet[3535]: E1106 00:29:46.896565 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:29:47.078772 kubelet[3535]: E1106 00:29:47.078624 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:29:47.080458 kubelet[3535]: E1106 00:29:47.080336 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:47.081681 kubelet[3535]: E1106 00:29:47.081503 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:29:47.161627 sshd[5554]: Connection closed by 147.75.109.163 port 42934 Nov 6 00:29:47.163696 sshd-session[5548]: pam_unix(sshd:session): session closed 
for user core Nov 6 00:29:47.169944 systemd[1]: sshd@10-172.31.28.191:22-147.75.109.163:42934.service: Deactivated successfully. Nov 6 00:29:47.175464 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:29:47.179064 systemd-logind[1847]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:29:47.181019 systemd-logind[1847]: Removed session 11. Nov 6 00:29:48.441638 ntpd[2074]: Listen normally on 6 vxlan.calico 192.168.126.64:123 Nov 6 00:29:48.441694 ntpd[2074]: Listen normally on 7 vxlan.calico [fe80::64a9:faff:fe08:301e%4]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 6 vxlan.calico 192.168.126.64:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 7 vxlan.calico [fe80::64a9:faff:fe08:301e%4]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 8 cali104d7af2e2f [fe80::ecee:eeff:feee:eeee%7]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 9 calie8f019e6ae3 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 10 cali882487907ab [fe80::ecee:eeff:feee:eeee%9]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 11 califdae6b45193 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 12 calibad62080fbd [fe80::ecee:eeff:feee:eeee%11]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 13 cali85762c41cc5 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 14 cali19da6d63f65 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 6 00:29:48.442820 ntpd[2074]: 6 Nov 00:29:48 ntpd[2074]: Listen normally on 15 cali5cc3b627c17 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 6 00:29:48.441718 ntpd[2074]: Listen normally on 8 cali104d7af2e2f [fe80::ecee:eeff:feee:eeee%7]:123 Nov 6 00:29:48.441738 
ntpd[2074]: Listen normally on 9 calie8f019e6ae3 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 6 00:29:48.441757 ntpd[2074]: Listen normally on 10 cali882487907ab [fe80::ecee:eeff:feee:eeee%9]:123 Nov 6 00:29:48.441781 ntpd[2074]: Listen normally on 11 califdae6b45193 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 6 00:29:48.441800 ntpd[2074]: Listen normally on 12 calibad62080fbd [fe80::ecee:eeff:feee:eeee%11]:123 Nov 6 00:29:48.441823 ntpd[2074]: Listen normally on 13 cali85762c41cc5 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 6 00:29:48.441842 ntpd[2074]: Listen normally on 14 cali19da6d63f65 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 6 00:29:48.441862 ntpd[2074]: Listen normally on 15 cali5cc3b627c17 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 6 00:29:52.200731 systemd[1]: Started sshd@11-172.31.28.191:22-147.75.109.163:36214.service - OpenSSH per-connection server daemon (147.75.109.163:36214). Nov 6 00:29:52.394474 sshd[5582]: Accepted publickey for core from 147.75.109.163 port 36214 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:29:52.398205 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:52.404276 systemd-logind[1847]: New session 12 of user core. Nov 6 00:29:52.413365 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:29:52.645286 sshd[5586]: Connection closed by 147.75.109.163 port 36214 Nov 6 00:29:52.645989 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:52.650820 systemd-logind[1847]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:29:52.651141 systemd[1]: sshd@11-172.31.28.191:22-147.75.109.163:36214.service: Deactivated successfully. Nov 6 00:29:52.653434 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:29:52.655536 systemd-logind[1847]: Removed session 12. 
Nov 6 00:29:52.676131 systemd[1]: Started sshd@12-172.31.28.191:22-147.75.109.163:36216.service - OpenSSH per-connection server daemon (147.75.109.163:36216). Nov 6 00:29:52.857044 sshd[5599]: Accepted publickey for core from 147.75.109.163 port 36216 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:29:52.860791 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:52.875667 systemd-logind[1847]: New session 13 of user core. Nov 6 00:29:52.881838 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:29:53.176926 sshd[5602]: Connection closed by 147.75.109.163 port 36216 Nov 6 00:29:53.177782 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:53.186171 systemd[1]: sshd@12-172.31.28.191:22-147.75.109.163:36216.service: Deactivated successfully. Nov 6 00:29:53.190360 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:29:53.194810 systemd-logind[1847]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:29:53.209733 systemd-logind[1847]: Removed session 13. Nov 6 00:29:53.212188 systemd[1]: Started sshd@13-172.31.28.191:22-147.75.109.163:36222.service - OpenSSH per-connection server daemon (147.75.109.163:36222). Nov 6 00:29:53.395618 sshd[5612]: Accepted publickey for core from 147.75.109.163 port 36222 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:29:53.397384 sshd-session[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:53.405468 systemd-logind[1847]: New session 14 of user core. Nov 6 00:29:53.411872 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 6 00:29:53.621668 sshd[5615]: Connection closed by 147.75.109.163 port 36222 Nov 6 00:29:53.622416 sshd-session[5612]: pam_unix(sshd:session): session closed for user core Nov 6 00:29:53.637384 systemd[1]: sshd@13-172.31.28.191:22-147.75.109.163:36222.service: Deactivated successfully. Nov 6 00:29:53.646421 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:29:53.647662 systemd-logind[1847]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:29:53.650427 systemd-logind[1847]: Removed session 14. Nov 6 00:29:57.683929 containerd[1886]: time="2025-11-06T00:29:57.683898372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:29:57.955637 containerd[1886]: time="2025-11-06T00:29:57.955490520Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:57.956758 containerd[1886]: time="2025-11-06T00:29:57.956711752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:29:57.956940 containerd[1886]: time="2025-11-06T00:29:57.956806695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:29:57.957001 kubelet[3535]: E1106 00:29:57.956967 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:29:57.957277 kubelet[3535]: E1106 00:29:57.957015 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:29:57.957310 kubelet[3535]: E1106 00:29:57.957234 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npwg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7sr5_calico-system(a5bb8dc2-5212-45e9-b678-f5085dd45c44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:57.958141 containerd[1886]: time="2025-11-06T00:29:57.958114287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:29:57.959189 kubelet[3535]: E1106 00:29:57.959127 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:29:58.257566 containerd[1886]: time="2025-11-06T00:29:58.257516628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:58.258914 containerd[1886]: time="2025-11-06T00:29:58.258806758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:29:58.258914 containerd[1886]: time="2025-11-06T00:29:58.258844237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:29:58.259179 kubelet[3535]: E1106 00:29:58.259075 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:29:58.259179 kubelet[3535]: E1106 00:29:58.259129 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:29:58.259931 kubelet[3535]: E1106 00:29:58.259402 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xd652,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645cfdc79b-jfhrj_calico-system(07d203b7-097a-40f5-a623-e80d0cafaabf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:58.260073 containerd[1886]: time="2025-11-06T00:29:58.259657126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:29:58.260863 kubelet[3535]: E1106 00:29:58.260829 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:29:58.492596 containerd[1886]: 
time="2025-11-06T00:29:58.492538754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:58.493808 containerd[1886]: time="2025-11-06T00:29:58.493750425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:29:58.493956 containerd[1886]: time="2025-11-06T00:29:58.493923892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:29:58.494139 kubelet[3535]: E1106 00:29:58.494081 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:58.494213 kubelet[3535]: E1106 00:29:58.494139 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:58.494350 kubelet[3535]: E1106 00:29:58.494298 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xsms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-pzj7c_calico-apiserver(683394b5-a4c6-4d59-b702-aa09246c75cb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:58.495795 kubelet[3535]: E1106 00:29:58.495758 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:29:58.654842 systemd[1]: Started sshd@14-172.31.28.191:22-147.75.109.163:36238.service - OpenSSH per-connection server daemon (147.75.109.163:36238). Nov 6 00:29:58.686407 containerd[1886]: time="2025-11-06T00:29:58.686226107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:29:58.848845 sshd[5635]: Accepted publickey for core from 147.75.109.163 port 36238 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:29:58.850477 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:29:58.856432 systemd-logind[1847]: New session 15 of user core. Nov 6 00:29:58.861980 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 6 00:29:58.934290 containerd[1886]: time="2025-11-06T00:29:58.934153641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:58.935564 containerd[1886]: time="2025-11-06T00:29:58.935503610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:29:58.935679 containerd[1886]: time="2025-11-06T00:29:58.935629245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:29:58.935896 kubelet[3535]: E1106 00:29:58.935853 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:29:58.935977 kubelet[3535]: E1106 00:29:58.935918 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:29:58.936620 containerd[1886]: time="2025-11-06T00:29:58.936424312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:29:58.948961 kubelet[3535]: E1106 00:29:58.948905 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:399009cb1ccf418794e77c19f7d21413,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:59.085395 sshd[5638]: Connection closed by 147.75.109.163 port 36238 Nov 6 00:29:59.087746 sshd-session[5635]: pam_unix(sshd:session): session closed for user core 
Nov 6 00:29:59.097397 systemd-logind[1847]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:29:59.097574 systemd[1]: sshd@14-172.31.28.191:22-147.75.109.163:36238.service: Deactivated successfully. Nov 6 00:29:59.101396 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:29:59.106717 systemd-logind[1847]: Removed session 15. Nov 6 00:29:59.177786 containerd[1886]: time="2025-11-06T00:29:59.177732773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:59.178847 containerd[1886]: time="2025-11-06T00:29:59.178802341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:29:59.178934 containerd[1886]: time="2025-11-06T00:29:59.178879234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:29:59.179198 kubelet[3535]: E1106 00:29:59.179124 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:29:59.179198 kubelet[3535]: E1106 00:29:59.179174 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:29:59.179752 kubelet[3535]: E1106 00:29:59.179398 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:59.179902 containerd[1886]: time="2025-11-06T00:29:59.179749258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:29:59.445239 containerd[1886]: time="2025-11-06T00:29:59.445195748Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:59.446259 containerd[1886]: time="2025-11-06T00:29:59.446178141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:29:59.446378 containerd[1886]: time="2025-11-06T00:29:59.446259353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:29:59.446610 kubelet[3535]: E1106 00:29:59.446537 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:29:59.446691 kubelet[3535]: E1106 00:29:59.446618 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:29:59.446890 
kubelet[3535]: E1106 00:29:59.446830 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:59.447496 containerd[1886]: time="2025-11-06T00:29:59.447469351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:29:59.449001 kubelet[3535]: E1106 00:29:59.448924 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:29:59.681169 containerd[1886]: time="2025-11-06T00:29:59.681121110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:59.682449 containerd[1886]: time="2025-11-06T00:29:59.682316409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
Nov 6 00:29:59.682449 containerd[1886]: time="2025-11-06T00:29:59.682413123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:29:59.682981 kubelet[3535]: E1106 00:29:59.682956 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:29:59.683089 kubelet[3535]: E1106 00:29:59.683075 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:29:59.683256 kubelet[3535]: E1106 00:29:59.683221 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:59.685525 containerd[1886]: time="2025-11-06T00:29:59.684437039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:29:59.685621 kubelet[3535]: E1106 00:29:59.684643 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:29:59.958246 containerd[1886]: time="2025-11-06T00:29:59.958181916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:29:59.959249 containerd[1886]: time="2025-11-06T00:29:59.959174115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:29:59.959358 containerd[1886]: time="2025-11-06T00:29:59.959261881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:29:59.959516 kubelet[3535]: E1106 00:29:59.959432 3535 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:59.959516 kubelet[3535]: E1106 00:29:59.959481 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:29:59.960017 kubelet[3535]: E1106 00:29:59.959972 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pknzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-nb8z8_calico-apiserver(7bcd5d84-f469-41bd-a70e-01d6d2e8ee36): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:29:59.961196 kubelet[3535]: E1106 00:29:59.961155 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:30:04.127430 systemd[1]: Started sshd@15-172.31.28.191:22-147.75.109.163:35516.service - OpenSSH per-connection server daemon (147.75.109.163:35516). Nov 6 00:30:04.362014 sshd[5658]: Accepted publickey for core from 147.75.109.163 port 35516 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:04.365045 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:04.372709 systemd-logind[1847]: New session 16 of user core. Nov 6 00:30:04.378344 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:30:04.596889 sshd[5661]: Connection closed by 147.75.109.163 port 35516 Nov 6 00:30:04.597525 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:04.602620 systemd[1]: sshd@15-172.31.28.191:22-147.75.109.163:35516.service: Deactivated successfully. Nov 6 00:30:04.605404 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:30:04.606948 systemd-logind[1847]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:30:04.609274 systemd-logind[1847]: Removed session 16. 
Nov 6 00:30:07.114891 containerd[1886]: time="2025-11-06T00:30:07.114693056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" id:\"0be4d84fa4cf89bd974e480e589555358c09055fb8e9e399129944cb85761546\" pid:5686 exited_at:{seconds:1762389007 nanos:114289957}" Nov 6 00:30:09.629837 systemd[1]: Started sshd@16-172.31.28.191:22-147.75.109.163:35518.service - OpenSSH per-connection server daemon (147.75.109.163:35518). Nov 6 00:30:09.687978 kubelet[3535]: E1106 00:30:09.687860 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:30:09.819041 sshd[5699]: Accepted publickey for core from 147.75.109.163 port 35518 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:09.852224 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:09.865591 systemd-logind[1847]: New session 17 of user core. Nov 6 00:30:09.868802 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 6 00:30:10.189975 sshd[5702]: Connection closed by 147.75.109.163 port 35518 Nov 6 00:30:10.192691 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:10.204088 systemd-logind[1847]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:30:10.204360 systemd[1]: sshd@16-172.31.28.191:22-147.75.109.163:35518.service: Deactivated successfully. Nov 6 00:30:10.207407 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:30:10.210010 systemd-logind[1847]: Removed session 17. Nov 6 00:30:10.689574 kubelet[3535]: E1106 00:30:10.689486 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:30:11.685338 kubelet[3535]: E1106 00:30:11.685108 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:30:12.684317 kubelet[3535]: E1106 00:30:12.683654 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:30:13.684882 kubelet[3535]: E1106 00:30:13.684792 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:30:15.226111 systemd[1]: Started sshd@17-172.31.28.191:22-147.75.109.163:36056.service - OpenSSH per-connection server daemon (147.75.109.163:36056). 
Nov 6 00:30:15.408746 sshd[5720]: Accepted publickey for core from 147.75.109.163 port 36056 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:15.410487 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:15.421247 systemd-logind[1847]: New session 18 of user core. Nov 6 00:30:15.426802 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:30:15.680838 sshd[5723]: Connection closed by 147.75.109.163 port 36056 Nov 6 00:30:15.682807 sshd-session[5720]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:15.693702 kubelet[3535]: E1106 00:30:15.693643 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:30:15.699441 systemd[1]: sshd@17-172.31.28.191:22-147.75.109.163:36056.service: Deactivated successfully. Nov 6 00:30:15.705184 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:30:15.714813 systemd-logind[1847]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:30:15.734206 systemd[1]: Started sshd@18-172.31.28.191:22-147.75.109.163:36058.service - OpenSSH per-connection server daemon (147.75.109.163:36058). Nov 6 00:30:15.738225 systemd-logind[1847]: Removed session 18. 
Nov 6 00:30:15.933870 sshd[5735]: Accepted publickey for core from 147.75.109.163 port 36058 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:15.936267 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:15.946169 systemd-logind[1847]: New session 19 of user core. Nov 6 00:30:15.954059 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:30:19.527598 sshd[5738]: Connection closed by 147.75.109.163 port 36058 Nov 6 00:30:19.529236 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:19.542395 systemd[1]: sshd@18-172.31.28.191:22-147.75.109.163:36058.service: Deactivated successfully. Nov 6 00:30:19.545302 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:30:19.547126 systemd-logind[1847]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:30:19.549467 systemd-logind[1847]: Removed session 19. Nov 6 00:30:19.563718 systemd[1]: Started sshd@19-172.31.28.191:22-147.75.109.163:36072.service - OpenSSH per-connection server daemon (147.75.109.163:36072). Nov 6 00:30:19.781142 sshd[5754]: Accepted publickey for core from 147.75.109.163 port 36072 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:19.783690 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:19.790760 systemd-logind[1847]: New session 20 of user core. Nov 6 00:30:19.800829 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:30:20.921067 sshd[5757]: Connection closed by 147.75.109.163 port 36072 Nov 6 00:30:20.923151 sshd-session[5754]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:20.932311 systemd[1]: sshd@19-172.31.28.191:22-147.75.109.163:36072.service: Deactivated successfully. Nov 6 00:30:20.938932 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:30:20.943479 systemd-logind[1847]: Session 20 logged out. 
Waiting for processes to exit. Nov 6 00:30:20.958760 systemd[1]: Started sshd@20-172.31.28.191:22-147.75.109.163:52086.service - OpenSSH per-connection server daemon (147.75.109.163:52086). Nov 6 00:30:20.960739 systemd-logind[1847]: Removed session 20. Nov 6 00:30:21.159773 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 52086 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:21.161196 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:21.166855 systemd-logind[1847]: New session 21 of user core. Nov 6 00:30:21.175879 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:30:22.321957 sshd[5792]: Connection closed by 147.75.109.163 port 52086 Nov 6 00:30:22.322967 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:22.328200 systemd-logind[1847]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:30:22.328933 systemd[1]: sshd@20-172.31.28.191:22-147.75.109.163:52086.service: Deactivated successfully. Nov 6 00:30:22.332359 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:30:22.334523 systemd-logind[1847]: Removed session 21. Nov 6 00:30:22.360857 systemd[1]: Started sshd@21-172.31.28.191:22-147.75.109.163:52100.service - OpenSSH per-connection server daemon (147.75.109.163:52100). Nov 6 00:30:22.574246 sshd[5802]: Accepted publickey for core from 147.75.109.163 port 52100 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:22.576220 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:22.583031 systemd-logind[1847]: New session 22 of user core. Nov 6 00:30:22.588800 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 00:30:22.684313 containerd[1886]: time="2025-11-06T00:30:22.684032843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:30:22.928098 sshd[5805]: Connection closed by 147.75.109.163 port 52100 Nov 6 00:30:22.928968 sshd-session[5802]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:22.934074 systemd[1]: sshd@21-172.31.28.191:22-147.75.109.163:52100.service: Deactivated successfully. Nov 6 00:30:22.934134 systemd-logind[1847]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:30:22.936951 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:30:22.939892 systemd-logind[1847]: Removed session 22. Nov 6 00:30:23.032740 containerd[1886]: time="2025-11-06T00:30:23.032614262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:23.034395 containerd[1886]: time="2025-11-06T00:30:23.034304418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:30:23.034567 containerd[1886]: time="2025-11-06T00:30:23.034330631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:30:23.037840 kubelet[3535]: E1106 00:30:23.037770 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:30:23.038875 kubelet[3535]: E1106 00:30:23.037853 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:30:23.038875 kubelet[3535]: E1106 00:30:23.038041 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:399009cb1ccf418794e77c19f7d21413,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:23.040609 containerd[1886]: time="2025-11-06T00:30:23.040379144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:30:23.341941 containerd[1886]: time="2025-11-06T00:30:23.341870576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:23.343321 containerd[1886]: time="2025-11-06T00:30:23.343264166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:30:23.343475 containerd[1886]: time="2025-11-06T00:30:23.343362262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:30:23.343585 kubelet[3535]: E1106 00:30:23.343535 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:30:23.343688 kubelet[3535]: E1106 00:30:23.343627 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:30:23.343835 
kubelet[3535]: E1106 00:30:23.343781 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:23.345428 kubelet[3535]: E1106 00:30:23.345356 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:30:24.683851 containerd[1886]: time="2025-11-06T00:30:24.683558032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:30:24.977191 containerd[1886]: time="2025-11-06T00:30:24.977123365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:24.979265 containerd[1886]: time="2025-11-06T00:30:24.979216115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:30:24.979265 containerd[1886]: time="2025-11-06T00:30:24.979250933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:30:24.979566 kubelet[3535]: E1106 00:30:24.979487 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:30:24.979566 kubelet[3535]: E1106 00:30:24.979530 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:30:24.980081 containerd[1886]: time="2025-11-06T00:30:24.979847808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:30:24.980299 kubelet[3535]: E1106 00:30:24.980218 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.302435 containerd[1886]: time="2025-11-06T00:30:25.302301479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.303440 containerd[1886]: time="2025-11-06T00:30:25.303388271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:30:25.303574 containerd[1886]: time="2025-11-06T00:30:25.303399442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:25.303658 kubelet[3535]: E1106 00:30:25.303625 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:30:25.339906 kubelet[3535]: E1106 00:30:25.303674 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:30:25.339906 kubelet[3535]: E1106 00:30:25.303884 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npwg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7sr5_calico-system(a5bb8dc2-5212-45e9-b678-f5085dd45c44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.339906 kubelet[3535]: E1106 00:30:25.305871 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:30:25.340111 containerd[1886]: time="2025-11-06T00:30:25.304377276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:30:25.714090 containerd[1886]: time="2025-11-06T00:30:25.713960124Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.715139 containerd[1886]: time="2025-11-06T00:30:25.715071903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:30:25.715239 containerd[1886]: time="2025-11-06T00:30:25.715175281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:30:25.715492 kubelet[3535]: E1106 00:30:25.715450 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:30:25.715562 kubelet[3535]: E1106 00:30:25.715499 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:30:25.715848 kubelet[3535]: E1106 00:30:25.715763 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.716628 containerd[1886]: time="2025-11-06T00:30:25.716252177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:30:25.717688 kubelet[3535]: E1106 00:30:25.717442 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:30:25.986127 containerd[1886]: time="2025-11-06T00:30:25.986062502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:25.987257 containerd[1886]: time="2025-11-06T00:30:25.987178447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:30:25.987372 containerd[1886]: time="2025-11-06T00:30:25.987268545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:25.987488 kubelet[3535]: E1106 00:30:25.987443 3535 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:25.987819 kubelet[3535]: E1106 00:30:25.987503 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:25.987819 kubelet[3535]: E1106 00:30:25.987735 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xsms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-pzj7c_calico-apiserver(683394b5-a4c6-4d59-b702-aa09246c75cb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:25.989823 kubelet[3535]: E1106 00:30:25.989724 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:30:26.683726 containerd[1886]: time="2025-11-06T00:30:26.683246412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:30:26.982441 containerd[1886]: time="2025-11-06T00:30:26.982010140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:26.983399 containerd[1886]: time="2025-11-06T00:30:26.983281062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:30:26.983399 containerd[1886]: time="2025-11-06T00:30:26.983367176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:30:26.983613 kubelet[3535]: E1106 00:30:26.983562 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:26.983721 kubelet[3535]: E1106 00:30:26.983700 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:30:26.983891 kubelet[3535]: E1106 00:30:26.983844 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pknzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-nb8z8_calico-apiserver(7bcd5d84-f469-41bd-a70e-01d6d2e8ee36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:26.985360 kubelet[3535]: E1106 00:30:26.985314 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:30:27.961527 systemd[1]: Started sshd@22-172.31.28.191:22-147.75.109.163:52114.service - OpenSSH per-connection server daemon (147.75.109.163:52114). 
Nov 6 00:30:28.132244 sshd[5821]: Accepted publickey for core from 147.75.109.163 port 52114 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:28.133870 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:28.139755 systemd-logind[1847]: New session 23 of user core. Nov 6 00:30:28.145886 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:30:28.361456 sshd[5826]: Connection closed by 147.75.109.163 port 52114 Nov 6 00:30:28.363478 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:28.367646 systemd-logind[1847]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:30:28.368278 systemd[1]: sshd@22-172.31.28.191:22-147.75.109.163:52114.service: Deactivated successfully. Nov 6 00:30:28.370529 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:30:28.379838 systemd-logind[1847]: Removed session 23. Nov 6 00:30:28.683064 containerd[1886]: time="2025-11-06T00:30:28.682894725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:30:29.593446 containerd[1886]: time="2025-11-06T00:30:29.593399590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:30:29.597434 containerd[1886]: time="2025-11-06T00:30:29.597367194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:30:29.597434 containerd[1886]: time="2025-11-06T00:30:29.597385840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:30:29.597924 kubelet[3535]: E1106 00:30:29.597806 3535 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:30:29.597924 kubelet[3535]: E1106 00:30:29.597859 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:30:29.598328 kubelet[3535]: E1106 00:30:29.598049 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMoun
t{Name:kube-api-access-xd652,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645cfdc79b-jfhrj_calico-system(07d203b7-097a-40f5-a623-e80d0cafaabf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:30:29.599710 kubelet[3535]: E1106 00:30:29.599659 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:30:33.402451 systemd[1]: Started sshd@23-172.31.28.191:22-147.75.109.163:36440.service - OpenSSH per-connection server daemon (147.75.109.163:36440). Nov 6 00:30:33.596697 sshd[5838]: Accepted publickey for core from 147.75.109.163 port 36440 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:33.600115 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:33.613296 systemd-logind[1847]: New session 24 of user core. Nov 6 00:30:33.619798 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:30:33.981607 sshd[5841]: Connection closed by 147.75.109.163 port 36440 Nov 6 00:30:33.982353 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:33.990228 systemd[1]: sshd@23-172.31.28.191:22-147.75.109.163:36440.service: Deactivated successfully. Nov 6 00:30:33.995502 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:30:33.997559 systemd-logind[1847]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:30:34.001642 systemd-logind[1847]: Removed session 24. 
Nov 6 00:30:37.072543 containerd[1886]: time="2025-11-06T00:30:37.072485202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" id:\"53efd1d8b2bb34f51a23a5d1bba7dd1dc675d64a996b49691b7278341b336aea\" pid:5865 exited_at:{seconds:1762389037 nanos:56365183}" Nov 6 00:30:38.683349 kubelet[3535]: E1106 00:30:38.683129 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:30:38.686343 kubelet[3535]: E1106 00:30:38.686272 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" 
podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:30:39.014308 systemd[1]: Started sshd@24-172.31.28.191:22-147.75.109.163:36454.service - OpenSSH per-connection server daemon (147.75.109.163:36454). Nov 6 00:30:39.216297 sshd[5879]: Accepted publickey for core from 147.75.109.163 port 36454 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:39.218679 sshd-session[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:39.226181 systemd-logind[1847]: New session 25 of user core. Nov 6 00:30:39.232466 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:30:39.640615 sshd[5882]: Connection closed by 147.75.109.163 port 36454 Nov 6 00:30:39.640300 sshd-session[5879]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:39.647604 systemd-logind[1847]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:30:39.651390 systemd[1]: sshd@24-172.31.28.191:22-147.75.109.163:36454.service: Deactivated successfully. Nov 6 00:30:39.656380 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:30:39.662604 systemd-logind[1847]: Removed session 25. 
Nov 6 00:30:39.683679 kubelet[3535]: E1106 00:30:39.683621 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:30:39.686380 kubelet[3535]: E1106 00:30:39.685398 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:30:41.686874 kubelet[3535]: E1106 00:30:41.686676 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:30:41.686874 kubelet[3535]: E1106 00:30:41.686747 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:30:44.677421 systemd[1]: Started sshd@25-172.31.28.191:22-147.75.109.163:55222.service - OpenSSH per-connection server daemon (147.75.109.163:55222). Nov 6 00:30:44.877383 sshd[5896]: Accepted publickey for core from 147.75.109.163 port 55222 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:44.881285 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:44.891350 systemd-logind[1847]: New session 26 of user core. Nov 6 00:30:44.897788 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 00:30:45.151837 sshd[5899]: Connection closed by 147.75.109.163 port 55222 Nov 6 00:30:45.153843 sshd-session[5896]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:45.160554 systemd[1]: sshd@25-172.31.28.191:22-147.75.109.163:55222.service: Deactivated successfully. Nov 6 00:30:45.165417 systemd[1]: session-26.scope: Deactivated successfully. 
Nov 6 00:30:45.169960 systemd-logind[1847]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:30:45.171409 systemd-logind[1847]: Removed session 26. Nov 6 00:30:50.190848 systemd[1]: Started sshd@26-172.31.28.191:22-147.75.109.163:38756.service - OpenSSH per-connection server daemon (147.75.109.163:38756). Nov 6 00:30:50.402910 sshd[5911]: Accepted publickey for core from 147.75.109.163 port 38756 ssh2: RSA SHA256:Deh/cOd523FFskQml7R02KLq0LH0zYpAbPnLB155Ov8 Nov 6 00:30:50.405168 sshd-session[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:30:50.412809 systemd-logind[1847]: New session 27 of user core. Nov 6 00:30:50.417972 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 00:30:51.172143 sshd[5915]: Connection closed by 147.75.109.163 port 38756 Nov 6 00:30:51.173805 sshd-session[5911]: pam_unix(sshd:session): session closed for user core Nov 6 00:30:51.179547 systemd[1]: sshd@26-172.31.28.191:22-147.75.109.163:38756.service: Deactivated successfully. Nov 6 00:30:51.184554 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 00:30:51.186829 systemd-logind[1847]: Session 27 logged out. Waiting for processes to exit. Nov 6 00:30:51.190297 systemd-logind[1847]: Removed session 27. 
Nov 6 00:30:51.690258 kubelet[3535]: E1106 00:30:51.689831 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:30:53.686854 kubelet[3535]: E1106 00:30:53.686764 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:30:53.687758 kubelet[3535]: E1106 00:30:53.686876 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:30:53.687758 kubelet[3535]: E1106 00:30:53.687669 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:30:54.683205 kubelet[3535]: E1106 00:30:54.682899 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" 
podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:30:56.682618 kubelet[3535]: E1106 00:30:56.682543 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:31:04.466762 systemd[1]: cri-containerd-dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04.scope: Deactivated successfully. Nov 6 00:31:04.467486 systemd[1]: cri-containerd-dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04.scope: Consumed 11.746s CPU time, 113.1M memory peak, 47.9M read from disk. Nov 6 00:31:04.471684 containerd[1886]: time="2025-11-06T00:31:04.471646066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\" id:\"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\" pid:3857 exit_status:1 exited_at:{seconds:1762389064 nanos:470950212}" Nov 6 00:31:04.480320 containerd[1886]: time="2025-11-06T00:31:04.480239271Z" level=info msg="received exit event container_id:\"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\" id:\"dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04\" pid:3857 exit_status:1 exited_at:{seconds:1762389064 nanos:470950212}" Nov 6 00:31:04.615865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04-rootfs.mount: Deactivated successfully. 
Nov 6 00:31:04.836490 kubelet[3535]: E1106 00:31:04.836408 3535 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-191?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 6 00:31:05.355399 kubelet[3535]: I1106 00:31:05.355330 3535 scope.go:117] "RemoveContainer" containerID="dd5a43c47dd10683aa96b8d8370b0e4082ca96b42f6a148b5b8dd049a5c01c04" Nov 6 00:31:05.375381 containerd[1886]: time="2025-11-06T00:31:05.375319846Z" level=info msg="CreateContainer within sandbox \"2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 6 00:31:05.459072 containerd[1886]: time="2025-11-06T00:31:05.458716221Z" level=info msg="Container 544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:31:05.484725 containerd[1886]: time="2025-11-06T00:31:05.484649936Z" level=info msg="CreateContainer within sandbox \"2dd07ba97e3eaf2950efdfb7da22a666ee1a945d92e0ca1bcffcce32c773c83b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444\"" Nov 6 00:31:05.485613 containerd[1886]: time="2025-11-06T00:31:05.485390892Z" level=info msg="StartContainer for \"544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444\"" Nov 6 00:31:05.486362 containerd[1886]: time="2025-11-06T00:31:05.486316900Z" level=info msg="connecting to shim 544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444" address="unix:///run/containerd/s/817bacdd642494d9791ea02a56f1cb45b3345e5a1aaea8bea78cdf928948c3d6" protocol=ttrpc version=3 Nov 6 00:31:05.513617 systemd[1]: Started cri-containerd-544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444.scope - libcontainer container 
544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444. Nov 6 00:31:05.578829 containerd[1886]: time="2025-11-06T00:31:05.578776985Z" level=info msg="StartContainer for \"544eb8faaf37eaae740566b43ad63cda3bf1c8e08c0724c77ff5b814c1be3444\" returns successfully" Nov 6 00:31:05.655940 systemd[1]: cri-containerd-33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779.scope: Deactivated successfully. Nov 6 00:31:05.656220 systemd[1]: cri-containerd-33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779.scope: Consumed 4.912s CPU time, 90.3M memory peak, 63.3M read from disk. Nov 6 00:31:05.660377 containerd[1886]: time="2025-11-06T00:31:05.660299551Z" level=info msg="received exit event container_id:\"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\" id:\"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\" pid:3097 exit_status:1 exited_at:{seconds:1762389065 nanos:660005127}" Nov 6 00:31:05.660995 containerd[1886]: time="2025-11-06T00:31:05.660951100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\" id:\"33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779\" pid:3097 exit_status:1 exited_at:{seconds:1762389065 nanos:660005127}" Nov 6 00:31:05.685264 containerd[1886]: time="2025-11-06T00:31:05.685205713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:31:05.697993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779-rootfs.mount: Deactivated successfully. 
Nov 6 00:31:05.942803 containerd[1886]: time="2025-11-06T00:31:05.942506457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:05.944972 containerd[1886]: time="2025-11-06T00:31:05.944838364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:31:05.944972 containerd[1886]: time="2025-11-06T00:31:05.944883329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:31:05.945607 kubelet[3535]: E1106 00:31:05.945497 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:31:05.946201 kubelet[3535]: E1106 00:31:05.945654 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:31:05.946201 kubelet[3535]: E1106 00:31:05.945950 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npwg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h7sr5_calico-system(a5bb8dc2-5212-45e9-b678-f5085dd45c44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:05.948047 kubelet[3535]: E1106 00:31:05.947988 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h7sr5" podUID="a5bb8dc2-5212-45e9-b678-f5085dd45c44" Nov 6 00:31:06.371625 kubelet[3535]: I1106 00:31:06.371595 3535 scope.go:117] "RemoveContainer" containerID="33ae8ddfa4c81dc18b0807d475ebe4f103cf9f13d77ebc8adb290e1bec26f779" Nov 6 00:31:06.375800 containerd[1886]: time="2025-11-06T00:31:06.375750217Z" level=info 
msg="CreateContainer within sandbox \"50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 6 00:31:06.470308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827417011.mount: Deactivated successfully. Nov 6 00:31:06.475244 containerd[1886]: time="2025-11-06T00:31:06.475208453Z" level=info msg="Container 13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:31:06.482010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957112317.mount: Deactivated successfully. Nov 6 00:31:06.514769 containerd[1886]: time="2025-11-06T00:31:06.514717718Z" level=info msg="CreateContainer within sandbox \"50b1bc9a9f834a198208b18035d883f2e732a4d05338c6dbb633612b63ba47aa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0\"" Nov 6 00:31:06.515632 containerd[1886]: time="2025-11-06T00:31:06.515256151Z" level=info msg="StartContainer for \"13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0\"" Nov 6 00:31:06.516600 containerd[1886]: time="2025-11-06T00:31:06.516527400Z" level=info msg="connecting to shim 13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0" address="unix:///run/containerd/s/15b24b360a537c9902625d97a50858fb60855f59640f745a3fa22854fbb5e24c" protocol=ttrpc version=3 Nov 6 00:31:06.540077 systemd[1]: Started cri-containerd-13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0.scope - libcontainer container 13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0. 
Nov 6 00:31:06.612698 containerd[1886]: time="2025-11-06T00:31:06.612652881Z" level=info msg="StartContainer for \"13a9ba2b18d226103d3edbc1320d349ae96360e1ac93d1b6826102cb319a24e0\" returns successfully" Nov 6 00:31:06.683789 containerd[1886]: time="2025-11-06T00:31:06.683454980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:31:06.862126 update_engine[1848]: I20251106 00:31:06.861869 1848 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 6 00:31:06.862126 update_engine[1848]: I20251106 00:31:06.862048 1848 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 6 00:31:06.864559 update_engine[1848]: I20251106 00:31:06.864517 1848 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 6 00:31:06.865932 update_engine[1848]: I20251106 00:31:06.865885 1848 omaha_request_params.cc:62] Current group set to beta Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866022 1848 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866037 1848 update_attempter.cc:643] Scheduling an action processor start. 
Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866056 1848 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866096 1848 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866158 1848 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866165 1848 omaha_request_action.cc:272] Request: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: Nov 6 00:31:06.867079 update_engine[1848]: I20251106 00:31:06.866171 1848 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 6 00:31:06.882554 update_engine[1848]: I20251106 00:31:06.881749 1848 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 6 00:31:06.882554 update_engine[1848]: I20251106 00:31:06.882432 1848 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 6 00:31:06.894108 locksmithd[1899]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 6 00:31:06.913218 update_engine[1848]: E20251106 00:31:06.912897 1848 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 6 00:31:06.913218 update_engine[1848]: I20251106 00:31:06.913169 1848 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 6 00:31:06.949572 containerd[1886]: time="2025-11-06T00:31:06.949441659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:06.951646 containerd[1886]: time="2025-11-06T00:31:06.951602009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:31:06.951811 containerd[1886]: time="2025-11-06T00:31:06.951635081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:31:06.951931 kubelet[3535]: E1106 00:31:06.951894 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:31:06.952301 kubelet[3535]: E1106 00:31:06.951948 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:31:06.952301 kubelet[3535]: E1106 00:31:06.952208 3535 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:06.952771 containerd[1886]: time="2025-11-06T00:31:06.952742911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:31:07.004211 containerd[1886]: time="2025-11-06T00:31:07.004164432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ba3a6768df8709f85f94995cde03c91596109532e635673064339c81648b80a\" id:\"4353425af5f0296b606c1f5acd3dcb982efea7f8bc57e3d3976fa23cca860f1d\" pid:6035 exited_at:{seconds:1762389067 nanos:3798197}" Nov 6 00:31:07.257326 containerd[1886]: time="2025-11-06T00:31:07.257270190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:07.259408 containerd[1886]: time="2025-11-06T00:31:07.259348132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:31:07.259557 containerd[1886]: time="2025-11-06T00:31:07.259431171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:31:07.259638 kubelet[3535]: E1106 00:31:07.259603 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:31:07.259682 kubelet[3535]: E1106 00:31:07.259647 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:31:07.260220 containerd[1886]: time="2025-11-06T00:31:07.259917008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:31:07.260301 kubelet[3535]: E1106 00:31:07.259904 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xsms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-pzj7c_calico-apiserver(683394b5-a4c6-4d59-b702-aa09246c75cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:07.261739 kubelet[3535]: E1106 00:31:07.261692 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-pzj7c" podUID="683394b5-a4c6-4d59-b702-aa09246c75cb" Nov 6 00:31:07.554925 containerd[1886]: time="2025-11-06T00:31:07.554781026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:07.556930 containerd[1886]: 
time="2025-11-06T00:31:07.556877585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:31:07.557564 containerd[1886]: time="2025-11-06T00:31:07.556972908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:31:07.557634 kubelet[3535]: E1106 00:31:07.557276 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:31:07.557634 kubelet[3535]: E1106 00:31:07.557324 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:31:07.557634 kubelet[3535]: E1106 00:31:07.557522 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:399009cb1ccf418794e77c19f7d21413,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:07.558124 containerd[1886]: time="2025-11-06T00:31:07.558092048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 
00:31:07.686350 kubelet[3535]: E1106 00:31:07.686088 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645cfdc79b-jfhrj" podUID="07d203b7-097a-40f5-a623-e80d0cafaabf" Nov 6 00:31:07.823872 containerd[1886]: time="2025-11-06T00:31:07.823747232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:07.826152 containerd[1886]: time="2025-11-06T00:31:07.826086996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:31:07.826292 containerd[1886]: time="2025-11-06T00:31:07.826218936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:31:07.826484 kubelet[3535]: E1106 00:31:07.826435 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:31:07.826547 kubelet[3535]: E1106 00:31:07.826530 3535 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:31:07.827321 kubelet[3535]: E1106 00:31:07.826965 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59f95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tw7sh_calico-system(cdd556f5-82eb-470d-88d2-246c63940429): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:07.827574 containerd[1886]: time="2025-11-06T00:31:07.827550081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:31:07.828710 kubelet[3535]: E1106 00:31:07.828635 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tw7sh" podUID="cdd556f5-82eb-470d-88d2-246c63940429" Nov 6 00:31:08.130452 containerd[1886]: time="2025-11-06T00:31:08.130325385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:08.132854 containerd[1886]: 
time="2025-11-06T00:31:08.132758681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:31:08.132999 containerd[1886]: time="2025-11-06T00:31:08.132778668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:31:08.133266 kubelet[3535]: E1106 00:31:08.133222 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:31:08.133680 kubelet[3535]: E1106 00:31:08.133280 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:31:08.133680 kubelet[3535]: E1106 00:31:08.133432 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vsl8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8658cddcb4-t8jhs_calico-system(06272151-9588-4a91-b3be-275f9fb7fb76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:08.134706 kubelet[3535]: E1106 00:31:08.134651 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8658cddcb4-t8jhs" podUID="06272151-9588-4a91-b3be-275f9fb7fb76" Nov 6 00:31:08.683815 containerd[1886]: time="2025-11-06T00:31:08.683777608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:31:08.964266 containerd[1886]: time="2025-11-06T00:31:08.963966188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:31:08.966348 containerd[1886]: time="2025-11-06T00:31:08.966177278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:31:08.966348 containerd[1886]: time="2025-11-06T00:31:08.966284042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:31:08.966787 
kubelet[3535]: E1106 00:31:08.966749 3535 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:31:08.966961 kubelet[3535]: E1106 00:31:08.966910 3535 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:31:08.967291 kubelet[3535]: E1106 00:31:08.967212 3535 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pknzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c94866d7-nb8z8_calico-apiserver(7bcd5d84-f469-41bd-a70e-01d6d2e8ee36): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:31:08.968471 kubelet[3535]: E1106 00:31:08.968402 3535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c94866d7-nb8z8" podUID="7bcd5d84-f469-41bd-a70e-01d6d2e8ee36" Nov 6 00:31:09.973492 systemd[1]: cri-containerd-bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d.scope: Deactivated successfully. Nov 6 00:31:09.973872 systemd[1]: cri-containerd-bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d.scope: Consumed 3.307s CPU time, 40M memory peak, 35.6M read from disk. Nov 6 00:31:09.977962 containerd[1886]: time="2025-11-06T00:31:09.977918566Z" level=info msg="received exit event container_id:\"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\" id:\"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\" pid:3109 exit_status:1 exited_at:{seconds:1762389069 nanos:977575294}" Nov 6 00:31:09.978712 containerd[1886]: time="2025-11-06T00:31:09.978653778Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\" id:\"bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d\" pid:3109 exit_status:1 exited_at:{seconds:1762389069 nanos:977575294}" Nov 6 00:31:10.009554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d-rootfs.mount: Deactivated successfully. 
Nov 6 00:31:10.393599 kubelet[3535]: I1106 00:31:10.393530 3535 scope.go:117] "RemoveContainer" containerID="bedb85a4bb61659ca5b936d6c749fa645c018228047699c94af0b237db14247d" Nov 6 00:31:10.398427 containerd[1886]: time="2025-11-06T00:31:10.397726363Z" level=info msg="CreateContainer within sandbox \"c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 6 00:31:10.420648 containerd[1886]: time="2025-11-06T00:31:10.420595721Z" level=info msg="Container 6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:31:10.457701 containerd[1886]: time="2025-11-06T00:31:10.457649958Z" level=info msg="CreateContainer within sandbox \"c5a2c0a31c2abf7d644ed8fe304e54fe6ab24ea565385593defefd80bb4351e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff\"" Nov 6 00:31:10.459728 containerd[1886]: time="2025-11-06T00:31:10.458459536Z" level=info msg="StartContainer for \"6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff\"" Nov 6 00:31:10.460001 containerd[1886]: time="2025-11-06T00:31:10.459965311Z" level=info msg="connecting to shim 6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff" address="unix:///run/containerd/s/5382e775e8df1c6775a2262aaac038b4cfbed96865fc2be8e5d79861d3d7034b" protocol=ttrpc version=3 Nov 6 00:31:10.496053 systemd[1]: Started cri-containerd-6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff.scope - libcontainer container 6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff. 
Nov 6 00:31:10.567891 containerd[1886]: time="2025-11-06T00:31:10.567847538Z" level=info msg="StartContainer for \"6ab7e2de236b0b7abe33058032ebcc7cf44cba19846131d48d95e5f8fdc1f2ff\" returns successfully" Nov 6 00:31:14.837489 kubelet[3535]: E1106 00:31:14.837362 3535 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-191?timeout=10s\": context deadline exceeded"